JULY 22 2021 - RWE - Michele Jonsson Funk - FINAL
Evidence:
How Big Data is Changing
Scientific Standards
Michele Jonsson Funk, PhD
University of North Carolina at Chapel Hill
22 July 2021
Disclosures
Funding Support
• FDA (75F40119C10115, HHSF223201810183C)
• PCORI (CER‐2017C3‐9230)
• CDC (1U01DP006369, 1U01DD001231)
• NIH (NHLBI: R01 HL118255, NCATS: U54 TR002255, NIA: R01AG056479)
• HRSA (R40MC29455), Gillings Innovation Lab (GIL200811.0010)
• Center for Pharmacoepidemiology, Dept of Epidemiology, UNC Chapel Hill
• Center for Pharmacoepidemiology (Members: GlaxoSmithKline, Takeda, AbbVie, UCB,
Boehringer Ingelheim, Merck [past member]) provides salary support to MJF as Director.
• MJF serves on scientific advisory committee for GlaxoSmithKline on an unrelated product
with all honoraria paid to UNC.
Randomized controlled trials (RCTs)
• RCTs are designed to answer a specific question:
“Does this intervention reduce the risk of a specific health outcome when used
as indicated in a particular patient population?”
• In order to answer that question in support of a regulatory
submission, patients are selected to:
• Maximize background risk of the event of interest (to ensure enough events)
• Maximize adherence to protocol and complete follow‐up (maximize benefit of
drug, if present)
• Minimize any other health events that would prevent observing the main
outcome of interest
• Exclude patients in whom unknown risk of side effects is unacceptable
(pediatric patients, pregnant and nursing women)
9 Shortcomings of RCTs*
• Too Small – to study rare outcomes
• Too Simple – to study interactions and treatments that change over time
• Too Selected – to be generalizable to all patients who will receive the treatment
• Too Specific – to assess all relevant health outcomes
• Too Short – to study long-term effects
• Too Stale – to provide relevant evidence comparing treatment to contemporary alternatives
• Too Spendy – to assess many questions important to public health
• Too Slow – to identify potentially effective treatments in a timely manner
• Too Sample-size hungry – to study treatments for very rare conditions
*Individual limitations can be addressed, but these are true of RCTs in general.
Non‐experimental studies of treatment effects
Large enough to study
• Variety of clinically relevant outcomes including
• rare but serious (e.g. anaphylaxis, TTS)
• lagged or long‐term effects (e.g. cancer)
• Variety of relevant comparators, treatment strategies
Can be used for very rare conditions (e.g. DIPG, MD)
Can include sufficient numbers of patients for valid inference among
• those with co‐morbidities and/or co‐medications
• diverse populations
• elderly, children, pregnant women
• broader set of indications (e.g., less severe disease)
Slide adapted from T Stürmer
When is RR_RCT ≠ RR_RWE and RR_RWE is unbiased?
• Answering different questions
• Different estimands (aka causal contrast) (tx effect in treated vs. total popn)
• Heterogeneous treatment effects (HTE) and different populations
• If the background rate of the outcome differs and the true effect is non-null, HTE is
guaranteed on the absolute or relative scale
• Treatment itself is different
• RCT: 2 doses, 28 (−3/+7) days apart
• RWE: 2 doses up to 8 weeks apart
• Adherence differs
• Same Rx, same population. In the RCT, the Rx is provided for free; in the RWE study,
patients must pay a co-pay (or pay out of pocket), which not all can afford. Due to cost
differences between the tx of interest and the alternatives, the tx will appear less effective.
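The scale-dependence of effect homogeneity can be shown with simple arithmetic. This minimal sketch uses hypothetical baseline risks (0.10 and 0.20) and an assumed common relative risk of 0.5; none of these numbers come from any study.

```python
# Hypothetical illustration: two populations share the same (non-null)
# relative risk but differ in their baseline risk of the outcome.
baseline_risks = {"RCT population": 0.10, "RWE population": 0.20}
rr = 0.5  # assumed common relative risk (illustrative only)

for name, r0 in baseline_risks.items():
    r1 = rr * r0       # risk under treatment
    rd = r1 - r0       # risk difference (absolute scale)
    print(f"{name}: baseline risk={r0:.2f}, RR={rr}, RD={rd:+.2f}")

# Equal relative risks force unequal risk differences (-0.05 vs. -0.10):
# when baseline risks differ and the effect is non-null, homogeneity on
# one scale implies heterogeneity on the other.
```

Only the exact null (RR = 1, RD = 0) is homogeneous on both scales at once.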
Take away #1
RCTs cannot and/or will not
be conducted to address all important questions
regarding the safety and effectiveness of
medical interventions.
Big Data and systematic error
Random error decreases
with increasing sample size.
Error
Systematic error
(aka bias) does not.
Sample Size
Adapted from K Rothman, 2002
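The contrast between random and systematic error can be reproduced with a toy simulation (illustrative only; the fixed measurement bias of 0.5 is an arbitrary assumption): as n grows, the standard error shrinks toward zero while the estimate converges to the wrong value.

```python
import random
import statistics

def biased_sample_mean(n, true_mean=0.0, bias=0.5, seed=1):
    """Mean and standard error of n draws whose recorded values are
    all shifted by a fixed (systematic) bias."""
    rng = random.Random(seed)
    draws = [rng.gauss(true_mean, 1.0) + bias for _ in range(n)]
    return statistics.mean(draws), statistics.stdev(draws) / n ** 0.5

for n in (100, 10_000, 1_000_000):
    est, se = biased_sample_mean(n)
    print(f"n={n:>9,}  estimate={est:+.4f}  SE={se:.4f}")
```

No matter how large n gets, the estimate stays near 0.5 rather than the true mean of 0: more data buys precision, not accuracy.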
Precision
• Increasing sample size generally increases precision
• Caveat: a function of the number of 'outcomes', not the overall n
• Little random error
• Narrow confidence intervals
• "Highly significant" effects
• Not necessarily clinically meaningful differences
• Not an indication of accuracy
[Figure: example estimates with N=324,703 and N=327]
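The caveat that precision tracks events rather than cohort size can be illustrated with the standard large-sample standard error of a log rate ratio, sqrt(1/a + 1/b), where a and b are the event counts in the two groups. The counts below are made up for illustration.

```python
import math

def rr_ci95(events_a, events_b, rr=1.0):
    """Approximate 95% CI for a rate ratio using the large-sample
    standard error of the log rate ratio: sqrt(1/a + 1/b)."""
    se = math.sqrt(1 / events_a + 1 / events_b)
    return rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)

# A huge cohort with only 18 events yields a wide interval...
print(rr_ci95(8, 10))
# ...while a much smaller cohort with 900 events yields a narrow one.
print(rr_ci95(400, 500))
```

The overall n never enters the formula; only the event counts do, which is why a database of millions can still give imprecise answers about a rare outcome.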
Accuracy
• Absence of systematic bias
• Cannot be judged by the result itself*
• Based on adherence to shared standards for:
• Data quality
• Study design
• Rigorous analytic methods
*Recall: There are many reasons that the results from an RCT and
RWE may differ other than systematic bias in the RWE result.
Take away #2
Precision ≠ accuracy
Precision without accuracy
creates an illusion of certainty.
Threats to accuracy: systematic errors
1. Information bias
• Missing data
• Errors in the way data are recorded
2. Confounding
• Groups differ in their baseline risk of the outcome
3. Selection bias
• Process of selecting and following patients
Combating information bias
Randomized studies
• Blinding of patient and treating physician to assigned treatment
• Independent ascertainment of outcomes
Non-Randomized studies
• "Hard" outcomes (e.g. death)
• Deep understanding of the underlying health system, data sources, and performance of coding algorithms
• Use of appropriate analytic methods for incomplete data (not just analyzing the available complete observations)
Combating confounding
Randomized studies
• Randomizing large n balances groups (on average)
• Comparison of measured risk factors (Table 1)
Non-Randomized studies
• Ensure that all patients have a similar indication for treatment, are at a similar disease stage, and are not too frail for tx
• Measure and appropriately adjust for risk factors that are imbalanced between groups
• Conduct sensitivity analyses targeted at potential magnitude and direction of uncontrolled confounding
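Two of the non-randomized strategies above lend themselves to short formulas: the standardized mean difference is the usual "Table 1"-style balance diagnostic, and the E-value of VanderWeele and Ding quantifies how strong an unmeasured confounder would have to be to fully explain an observed RR ≥ 1. The ages below are invented for illustration.

```python
import math
import statistics

def standardized_mean_difference(x_treated, x_control):
    """Difference in means scaled by the pooled standard deviation;
    |SMD| > 0.1 is a common rule of thumb for meaningful imbalance."""
    m1, m0 = statistics.mean(x_treated), statistics.mean(x_control)
    s1, s0 = statistics.pstdev(x_treated), statistics.pstdev(x_control)
    return (m1 - m0) / math.sqrt((s1 ** 2 + s0 ** 2) / 2)

def e_value(rr):
    """E-value for an observed risk ratio rr >= 1 (VanderWeele & Ding):
    rr + sqrt(rr * (rr - 1))."""
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical ages in two treatment groups: clearly imbalanced.
print(round(standardized_mean_difference([62, 70, 68, 75, 71, 66],
                                         [55, 58, 61, 57, 63, 60]), 2))
# How strong an unmeasured confounder would need to be to explain RR=2.
print(round(e_value(2.0), 2))
```

Neither diagnostic proves the absence of confounding; they only make the assumptions behind an adjusted comparison explicit and quantifiable.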
Combating selection bias
Randomized studies
• Near 100% follow-up
• Run-in period to reduce non-compliance with assigned tx
• Restricted to those without comorbid conditions
Non-Randomized studies
• Careful planning needed to handle patients who change or stop treatment, those who cannot be followed for the full FU time, or those who die before they make it to the end of the planned follow-up
• Beware crystal balls and time travel.
Real World Evidence (RWE)
Decision Making
Context
How will RWE be used & by whom?
• Individual patients or providers
• Payers
• Regulator(s)
• Health Systems
What are the consequences of
making a wrong or no decision?
How quickly is it needed?
Figure adapted from Duke Margolis white paper on RWE
https://healthpolicy.duke.edu/sites/default/files/atoms/files/rwe_white_paper_2017.09.06.pdf
Take away #3
Robust, fit‐for‐purpose RWE requires:
1. Deep understanding of data sources and context in which they
were generated.
2. Appropriate application of the tools and methods specific to real
world studies.
Both 1 & 2 take time.
Conclusions
• RWE should be used to address those questions that RCTs will not or
cannot answer.
• Given the large sample sizes involved and the modest magnitude of
important treatment effects, the potential impact of systematic errors
on study conclusions is larger relative to random error.
• Specialized training, knowledge, and methods are needed to conduct these studies
rigorously; these are distinct from the training and methods for RCTs.
• Despite the time savings from using existing data, high-quality RWE is not a
push-button process.
Thank You
Michele Jonsson Funk ꞏ [email protected]
Michele Jonsson Funk, PhD, FISPE
Associate Professor
Department of Epidemiology
University of North Carolina at Chapel Hill
[email protected]
Areas of expertise
• Evaluation of analytic methods to support unbiased and efficient estimation of
causal effects
• Design and analysis of non-randomized studies of treatments to evaluate safety and
effectiveness
• Use of linked EHR and claims data to generate RWE
Selected activities
• Director, Center for Pharmacoepidemiology, UNC Chapel Hill
• Working Group XIII, Real‐World Data and Real‐World Evidence, Council for
International Organizations of Medical Sciences (CIOMS)
• Principal Investigator, “Detailing and Evaluating Tools to Expose Confounded
Treatment Effects (DETECTe)”, FDA #75F40119C10115
• Board member representing Academia Americas, ISPE, 2017‐2020