
International Centre for Credentialing and Education of Addiction Professionals (ICCE)

The Universal Prevention Curriculum (UPC) for Substance Use Training Series

Curriculum 1
Introduction to Prevention Science

MODULE 4—INTRODUCTION TO
MONITORING AND EVALUATION: KEY
TO EVIDENCE-BASED PREVENTION

4.1
Introduction

4.2
Training Goals

• Provide definitions and terminology used in the evaluation of substance use prevention interventions
• Overview of research methods, including:
  ◦ Research design
  ◦ Sampling
  ◦ Measurement and data collection methods
  ◦ Analysis
  ◦ Interpretation of research results

4.3
Learning Objectives

• Define key terminology associated with the evaluation of substance use prevention interventions
• Understand differences in research methods, including:
  ◦ Research designs
  ◦ Sampling methods
  ◦ Measurement and data collection methods
  ◦ Statistical approaches
  ◦ Interpretation of research findings
4.4
Terminology

• Evaluation and Research
• Monitoring and Evaluation
• Efficacy and Effectiveness
• Research Design
• Sampling
• Probability
• Measurement and Data Collection
• Validity
• Reliability
• Analysis
• Statistics
4.5
Evaluation and Research

4.6
Evaluation and Research

4.7
Definitions

• RESEARCH is “a systematic investigation, including development, testing and evaluation, designed to develop or contribute to generalizable knowledge” [US Federal Regulation 45 CFR §46.102(d)]
• Research encompasses a range of ‘systematic investigations’, including controlled laboratory studies, studies of animals, studies in clinical settings, and studies in the community
• EVALUATION IS A TYPE OF RESEARCH
• EVALUATION is a systematic or structured way of assessing the short- and long-term desired outcomes of a prevention program and those factors that are related to these outcomes

4.8
Purposes of Evaluation (1/2)

Level of impact/outcome:
• To what extent did the program achieve the desired outcomes, and was the level of these outcomes significantly greater than if no program had been delivered?
Reach:
• To what extent did the program achieve the same outcomes for everyone who participated, or only for certain groups among those who participated in the program?
4.9
Purposes of Evaluation (2/2)

Costs:
• To what extent did the benefits of the program outweigh the costs of the program itself?
Comparison:
• To what extent is one program more effective than another, holding costs constant?

4.10
When to Conduct Evaluation? (1/2)

Conception → Completion:
Planning a NEW intervention → Assessing a DEVELOPING intervention → Assessing a STABLE, MATURE intervention → Assessing an intervention after it has ENDED

The stage of program development influences the reason for program evaluation.

4.11
When to Conduct Evaluation? (2/2)

Conception → Completion:
Planning a NEW intervention → Assessing a DEVELOPING intervention → Assessing a STABLE, MATURE intervention → Assessing an intervention after it has ENDED
Efficacy evaluation (earlier stages) → Effectiveness evaluation (later stages)

The stage of program development influences the reason for program evaluation.
4.12
Evaluation Methods and
Intent

4.13
Efficacy and Effectiveness

• Efficacy is the extent to which an intervention (technology, treatment, procedure, service, or program) does more good than harm when delivered under optimal conditions
• Effectiveness trials test whether interventions are effective under “real-world” conditions or in “natural” settings. Effectiveness trials may also establish for whom, and under what conditions of delivery, the intervention is effective

4.14
Evaluation Process

• Did the prevention intervention/policy achieve its short-term outcomes?
• Did the intervention/policy achieve its intended effect(s) for the target population that received the intervention? Other important questions:
  ◦ Were there differential responses by subgroup (gender, ethnic group, substance use status)?
  ◦ What intervention/policy characteristics were associated with the outcomes that were achieved?
  ◦ To what extent was fidelity of delivery associated with positive/negative outcomes?
4.15
Points to Consider in Conducting an
Evaluation

• What is the purpose of the evaluation?
• What is going to be evaluated?
• Who would be interested in the evaluation outcomes, and why?
• What is your timeline? Is it realistic?
• What do you intend to do with the evaluation results?
• What resources are available for the evaluation (e.g., time, money, expertise)?
4.16
Evaluation System: Process Evaluation/Monitoring and Outcome Evaluation

• An evaluation system generally includes two important components:
  ◦ Process evaluation or monitoring
  ◦ Outcome evaluation
• Process evaluation or monitoring addresses the questions:
  ◦ What did we do?
  ◦ How much did we do?
  ◦ Who participated?
  ◦ Who implemented the intervention/policy components?
  ◦ Was the intervention/policy implemented as intended?
• Outcome evaluation addresses the question:
  ◦ Did we achieve what we wanted to achieve with the intervention/policy components?
4.17
Evaluation System and
Research Designs

4.18
Monitoring and Evaluation System

4.19
Introduction to Evaluation Design

An evaluation design is a guide for investigating a question or hypothesis. It includes:
• Research questions or hypotheses
• Study type or research design
• Definition of the population to be studied
• Sampling method
• Variables and their measurement
• Data collection methodology
• Statistical analysis plan
4.20
Types of Research Designs Used in Evaluation

• Experimental or quasi-experimental designs
  ◦ Classical experimental design
  ◦ Time series or interrupted time series
• Non-experimental designs
  ◦ One-group pre-test and post-test

4.21
Concerns for Evaluation

• Internal validity: Are the findings from the evaluation of a prevention intervention really the result of participation in or exposure to the intervention, or of something else?
• External validity: Can the findings from the evaluation of a prevention intervention be generalized to other situations and to other populations?

4.22
“Threats” to Internal Validity–Examples

• Maturation
• History
• Selection
• Testing
• Mortality
• Instrumentation

4.23
“Threats” to External Validity–Examples

• Situation: All situational specifics (e.g., intervention conditions, time, location, lighting, noise, intervention administration, developer involvement, timing, scope and extent of measurement)
• Pre-test effects: If cause-effect relationships can only be found when pre-tests are carried out, this limits the generalizability of the findings
• Post-test effects: If cause-effect relationships can only be found when post-tests are carried out, this also limits the generalizability of the findings
• Reactivity (placebo, novelty, and Hawthorne effects): Cause-effect relationships that are found may not generalize to other settings or situations if the effects occurred only because the situation was being studied
4.24
4.25
Classical Experimental Design (1/3)

[Diagram: classical experimental design; X denotes the prevention intervention]

4.26
Classical Experimental Design (2/3)

4.27
Classical Experimental Design (3/3)

• Strengths
  ◦ Clear causal inference
  ◦ Addresses issues of internal validity
• Weaknesses
  ◦ Expensive for large samples
  ◦ Difficult to manage
  ◦ Ethical issues
  ◦ Issues of external validity (generalization)
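
To make this concrete, here is a minimal sketch (in Python, with invented scores and group sizes; not part of the curriculum materials) of the analysis a classical experimental design supports: randomly assign participants to intervention and control conditions, then compare post-test outcomes with an independent-samples t-test.

```python
import random
from scipy import stats

random.seed(42)  # reproducible example

# Hypothetical post-test "risk score"; we assume the intervention
# lowers the mean score by about 5 points (illustrative only).
def post_test_score(received_intervention):
    base = random.gauss(50, 10)
    return base - 5 if received_intervention else base

# Randomly assign 100 participants to the two conditions.
ids = list(range(100))
random.shuffle(ids)
intervention_scores = [post_test_score(True) for _ in ids[:50]]
control_scores = [post_test_score(False) for _ in ids[50:]]

# Independent-samples t-test on post-test scores.
t_stat, p_value = stats.ttest_ind(intervention_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Random assignment is what licenses the causal inference noted above: with large enough groups, the only systematic difference between them is the intervention itself.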

4.28
Interrupted Time Series Experimental Design
(1/2)

O1 O2 O3 … O10  [INTERVENTION]  O11 O12 … On
Jan Feb Mar … Oct               Nov Dec Jan and beyond

4.29
Interrupted Time Series Experimental Design
(2/2)

Alcohol-related accidents observed each month:
O1 O2 O3 O4 … [INTERVENTION] O12 O13 … On
Jan Feb Mar … Oct            Nov Dec Jan and beyond

4.30
Interrupted Time Series Experiment

• Strengths
  ◦ Strong causal inference
  ◦ Multiple data points capture change over time
• Weaknesses
  ◦ Could be expensive
  ◦ Time consuming
  ◦ Subject to artifactual changes (e.g., in driving behavior)
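
One common analysis for such data is segmented regression, which estimates the pre-intervention trend plus any change in level and slope at the intervention point. Below is a minimal sketch in Python using statsmodels; the monthly accident counts and variable names are fabricated for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated monthly counts of alcohol-related accidents:
# 10 pre-intervention months (Jan-Oct), 8 post-intervention months.
accidents = [42, 45, 44, 47, 46, 49, 48, 50, 52, 51,
             38, 36, 35, 33, 34, 31, 30, 29]
n, n_pre = len(accidents), 10

df = pd.DataFrame({
    "accidents": accidents,
    "time": np.arange(n),                                       # underlying trend
    "after": [0] * n_pre + [1] * (n - n_pre),                   # level change
    "time_after": [0] * n_pre + list(range(1, n - n_pre + 1)),  # slope change
})

# Segmented regression: baseline slope + jump in level + change in slope.
model = smf.ols("accidents ~ time + after + time_after", data=df).fit()
print(model.params)  # the "after" coefficient estimates the level change
```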

4.31
One Group Pre-test and Post-test (1/3)

Group 1 Pre-test (O1) → Group 1 Post-test (O2)

4.32
One Group Pre-test and Post-test (2/3)

Juvenile Offenders Pre-test (O1) → Juvenile Offenders Post-test (O2)

4.33
One Group Pre-test and Post-test (3/3)

• Strengths
  ◦ Inexpensive
  ◦ Easy to administer
• Weaknesses
  ◦ No comparison group: observed change could be due to one or more sources other than the treatment (e.g., selection bias, history, maturation, instrumentation)
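
The usual analysis for this design is a paired-samples t-test on the pre-post change. A minimal Python sketch with invented scores follows (keeping in mind that even a significant change could reflect the threats listed above rather than the treatment).

```python
from scipy import stats

# Invented pre- and post-test scores for the same eight participants.
pre  = [12, 15, 11, 14, 13, 16, 12, 15]
post = [10, 13, 11, 12, 12, 14, 11, 13]

# Paired t-test: is the mean within-person change different from zero?
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```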

4.34
Sampling and Measurement

4.35
Sampling (1/5)

• Sampling is a process through which a subgroup of a larger population is selected for study, such that the subgroup is representative of the key characteristics of that larger population
• Sampling is used when resources and workload are constrained

4.36
Sampling (2/5)

[Diagram: a sample of 5 students drawn from a population of 30 students]
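
Drawing a simple random sample like the one pictured takes only a few lines; here is a sketch in Python with made-up student IDs.

```python
import random

random.seed(7)  # reproducible example

students = [f"student_{i:02d}" for i in range(1, 31)]  # population of 30
sample = random.sample(students, 5)                    # random sample of 5
print(sample)
```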

4.37
Sampling (3/5)

4.38
Sampling (4/5)

4.39
Types of Sampling Methods (5/5)

• Probability
• Non-probability

4.40
Measurement (1/6)

• Measurements are part of our everyday life
• Measurements depend on the quality of measurement instruments
• Measurements must be carefully defined

4.41
Measurement (2/6)

• Reliability: Stability of our measurements when measured over time
• Validity: The extent to which measurements truly reflect the attribute or quality we want to measure
• Substance use prevention instrumentation:
  ◦ Substance Abuse and Mental Health Services Administration: http://captus.samhsa.gov/sites/default/files/capt_resource/an_annotatedbibliography_of_measurement_compendia_06-22-12.pdf
  ◦ European Monitoring Centre for Drugs and Drug Addiction: http://www.emcdda.europa.eu/eib
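
As an illustration of reliability as stability over time, a common index is the test-retest correlation: administer the same instrument to the same respondents twice and correlate the two sets of scores. A minimal sketch with invented data:

```python
from scipy import stats

# Invented scores from the same six respondents at two time points.
time1 = [20, 25, 22, 30, 28, 24]
time2 = [21, 24, 23, 29, 27, 25]

# A high positive correlation suggests a stable (reliable) measure.
r, p_value = stats.pearsonr(time1, time2)
print(f"test-retest r = {r:.2f} (p = {p_value:.3f})")
```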

4.42
Types of Measures (3/6)

• Quantitative data is described in numbers and shows how often something occurs or to what degree a phenomenon exists
• Qualitative data is described in words and explains why people behave or feel the way they do

Source: http://captus.samhsa.gov/prevention-practice/epidemiology-and-prevention/epidemiological-data/1

4.43
Quantitative Measures (4/6)

• Answers, “How many?” “How often?”
• Measures levels of behavior and trends
• Is objective, standardized, and easily analyzed
• Is easily comparable to similar data from other communities and levels
• Examples: statistics, survey data, records, archival data

Source: http://captus.samhsa.gov/prevention-practice/epidemiology-and-prevention/epidemiological-data/1

4.44
Qualitative Measures (5/6)

• Answers, “Why?” “Why not?” or “What does it mean?”
• Allows insight into behavior, trends, and perceptions
• Is subjective and explanatory
• Helps interpret quantitative data and provides depth of understanding
• Examples: focus groups, key informant interviews, case studies, story-telling, observation

4.45
Data Collection, Analysis,
and Statistics

4.46
Measurement Collection Methods: Examples
(6/6)

• Quantitative measures
  ◦ Archival sources
  ◦ Population-based surveys
    ‣ Household members
    ‣ Students
    ‣ Special populations
• Qualitative measures
  ◦ Key informant interviews
  ◦ Focus groups
  ◦ Ethnographic studies
  ◦ Open-ended questions
4.47
Analysis

• Measurements become data through their transformation into a usable form, such as tables, scales, etc.
• Data analysis allows the evaluator to systematically describe the population and to begin to answer the research questions

4.48
Statistical Analysis

• Statistics
  ◦ Descriptive
  ◦ Inferential

4.49
Review of Basic Descriptive Statistics

• Mean
• Median
• Mode and Range
• Variance and Standard Deviation
• Frequency Distributions
• Histograms

4.50
Review of Basic Descriptive Statistics: Mean,
Median, Mode and Range
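
A worked example of these four measures, using Python's standard statistics module and a small made-up data set:

```python
import statistics

data = [2, 3, 3, 5, 7, 7, 7, 10]

print("mean   =", statistics.mean(data))    # 44 / 8 = 5.5
print("median =", statistics.median(data))  # (5 + 7) / 2 = 6.0
print("mode   =", statistics.mode(data))    # 7 occurs most often
print("range  =", max(data) - min(data))    # 10 - 2 = 8
```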

4.51
Inferential Statistics

• Inferential statistics can be used to determine associations between variables and to predict the likelihood of outcomes or events
• Inferential statistics rely on probabilities to assess whether an observed pattern of findings could have occurred by chance
• If the probability that the findings occurred by chance alone is sufficiently low, the finding is considered statistically significant
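
For a concrete example, the sketch below applies one common inferential test, a chi-square test of independence, to a fabricated 2×2 table asking whether substance use differs between program participants and non-participants (all counts are invented).

```python
from scipy.stats import chi2_contingency

#                     used substance   did not
table = [[15, 85],  # program participants
         [30, 70]]  # non-participants

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
# By convention, p < 0.05 is taken as evidence that the observed
# pattern is unlikely to have arisen by chance alone.
```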

4.52
Reporting Results

• It isn’t enough to conduct the evaluation. A complete evaluation includes reporting the results:
  ◦ Showing findings
  ◦ Presenting findings

4.53
4.54
Module 4 Evaluation
15 minutes
4.55
