Unit six:
Evaluation of evidence
June, 2017
Hawassa, Ethiopia
Session objectives
Define validity
Define precision
Describe bias
Describe confounding
Describe chance
Differentiate random and systematic error
List the Bradford-Hill criteria used to assess the strength
of evidence for a cause-and-effect relationship
1. Accuracy of Measurement
Accuracy = Validity + Precision
Accuracy is a general term denoting the absence of error
of all kinds
Sources of error in measurement are classified as either
random or systematic
Random error has been defined as "that part of our experience
that we cannot predict". From a statistical perspective, random
error can also be conceptualized as sampling variability.
Systematic error, or bias, is a difference between an
observed value and the true value due to all causes other than
sampling variability
2. Validity
Validity is the extent to which data collected actually reflect
the truth.
The concepts of sensitivity (the ability to detect true
positives) and specificity (the ability to detect true
negatives) can be used to characterize the validity of a
measure ("measurement validity").
Study results are also described as "valid" when there is no
systematic misrepresentation of effect or "bias"
("validity in the estimation of effect").
Validity is often described as internal or external.
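The sensitivity and specificity definitions above can be sketched with a small numeric example; the 2x2 counts below are hypothetical, chosen only for illustration:

```python
# Sketch: sensitivity and specificity from a hypothetical 2x2 table.
tp, fn = 90, 10   # diseased subjects: test positive / test negative
tn, fp = 85, 15   # healthy subjects: test negative / test positive

sensitivity = tp / (tp + fn)  # ability to detect true positives
specificity = tn / (tn + fp)  # ability to detect true negatives

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.85
```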
3. Precision
Precision, on the other hand, describes the extent to which
random error (i.e., sampling variation and the statistical
characteristics of the estimator) alters the measurement of
effects.
Precision in measurement and estimation corresponds to the
reduction of random error.
In epidemiology, precision is mostly related to sampling
variation (sampling error).
Random error can be reduced by increasing the sample size and
improving the efficiency of measurement
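The link between sample size and random error can be illustrated with the standard error of an estimated proportion, sqrt(p(1-p)/n); the prevalence value below is hypothetical:

```python
import math

# Sketch: random error (sampling variability) shrinks as sample size grows.
p = 0.3  # hypothetical true prevalence
for n in (100, 400, 1600):
    se = math.sqrt(p * (1 - p) / n)  # standard error of the estimated proportion
    print(f"n = {n:5d}  standard error = {se:.4f}")
# Prints 0.0458, 0.0229, 0.0115: quadrupling n halves the standard error.
```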
4. Internal vs. external validity
Internal validity concerns the validity of inferences that
do not proceed beyond the target population for the
study. Internal validity is threatened when the investigator
does not have sufficient data to control or rule out competing
explanations for the results.
Bias
Confounding
Chance
External validity (generalizability): can we make inferences
beyond the subjects of the study?
5. Bias
Bias may be defined as any systematic error in an
epidemiologic study that results in an incorrect estimate
of the association between exposure and risk of
disease.
Bias may result from systematic error (or difference
between exposed and unexposed populations or between
cases and controls) in the collection, recording, analysis,
or interpretation of data.
Bias is an error that affects one group more than another.
It may be intentional or unintentional.
Bias…
Bias results in false understanding about differences
between groups and generates misleading patterns of
health problems.
For this reason, it is important to design and conduct studies
in such a way that every possibility for introducing bias has
been taken into account and to take steps to minimize
chances of bias.
6. Types of bias
1. Selection bias: refers to any error that arises in the
process of identifying the study populations.
Selection bias can occur whenever the identification of
individual subjects for inclusion in the study on the basis of
either exposure (cohort) or disease (case-control) status
depends in some way on the other axis of interest.
2. Observation or information bias: includes any
systematic error in the measurement of information on
exposure or outcome.
Examples of selection bias
Berkson's bias - Case-control studies carried out
exclusively in hospital settings are subject to selection bias
attributable to the fact that risks of hospitalization can
combine in patients who have more than one condition.
Ascertainment bias - Differential surveillance or diagnosis
of individuals make those exposed or those diseased
systematically more or less likely to be enrolled in a study.
Examples of selection bias…
Non-response bias - Rates of response to surveys and
questionnaires in many studies may also be related to
exposure status, so that bias is a reasonable alternative
explanation for an observed association between exposure
and disease.
Loss to follow-up - This is a major source of bias in cohort
studies. Persons lost to follow-up may differ from those who
remain with respect to both exposure and outcome, biasing any
observed association.
Examples of Information bias
Interviewer bias - This can occur if the interviewer or
examiner is aware of the disease status (in a case-
control study) or the exposure status (in cohort and
experimental studies).
This kind of bias may affect every kind of epidemiologic study.
Recall bias - May result because affected persons may be
more (or less) likely to recall an exposure than healthy
subjects, or exposed persons may be more (or less) likely to
report disease. This source of bias is more problematic in
retrospective cohort or case-control studies.
Examples of Information bias…
Social desirability bias - Occurs because subjects are
systematically more likely to provide a socially acceptable
response.
Placebo effect - In experimental studies which are not
placebo-controlled, observed changes may be ascribed to the
positive effect of the subject's belief that the
intervention will be beneficial.
Examples of Information bias…
Lead-time bias - Detecting a condition before the person
shows clinical signs and symptoms (the "lead time") produces
an apparent prolongation of survival in persons who
participate in screening programs, rather than a real
prolongation of survival.
Individuals who are diagnosed early may actually have gained
more "disease time".
Examples of Information bias…
Length/time bias - Occurs in studies of screening tests for
cancer.
This occurs because screening tests for cancer tend to detect
more slow-growing tumours with a better prognosis, since
faster-growing tumours are more often detected because they
cause symptoms.
As a result, the mortality rate of cancers found on
screening will appear lower than that of tumours not
found on screening (though the effect is not due to the
screening itself).
Recommendations to minimize bias at the
time of study design
1. Choose study design carefully
If ethical and feasible, a randomized double blind trial
has the least potential for bias.
If loss to follow-up will not be substantial, a prospective
cohort study may have less bias than a case-control
study.
Controls for case-control studies should be maximally
comparable to cases except for the variable under study.
Recommendations to minimize bias…
2. Choose "hard" (i.e., objective) rather than subjective
outcomes.
3. "Blind" interviewers or examiners wherever possible.
4. Use well-defined criteria for identifying a "case" and
use closed-ended questions whenever possible.
7. Confounding
Confounding is the mixing of the effect of an extraneous
variable with the effects of the exposure and disease of
interest.
Characteristics of a Confounding Variable
A confounding factor must be a risk factor for the
disease (in the absence of the exposure of interest)
A confounding factor must be associated with the study
exposure.
A confounding factor must not be affected by the exposure
or the disease (should not be a consequence of either
the exposure or disease).
Effect of Confounding
Totally or partially account for the apparent effect
Mask an underlying true association
Reverse the actual direction of the association
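These effects can be illustrated with a small numeric sketch (all counts below are hypothetical): within each age stratum the exposure has no effect on risk, yet the crude, unstratified analysis suggests one, because age is linked to both exposure and disease.

```python
# Sketch: confounding by age, shown with hypothetical counts.

def risk_ratio(cases_exp, n_exp, cases_unexp, n_unexp):
    """Risk ratio = risk in the exposed / risk in the unexposed."""
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

# (cases_exposed, n_exposed, cases_unexposed, n_unexposed) per age stratum
young = (10, 200, 5, 100)    # risk 0.05 in both groups -> RR = 1.0
old   = (30, 100, 60, 200)   # risk 0.30 in both groups -> RR = 1.0

print("RR (young):", risk_ratio(*young))   # 1.0
print("RR (old):  ", risk_ratio(*old))     # 1.0

# Pooling the strata mixes age's effect with the exposure's:
crude = risk_ratio(10 + 30, 200 + 100, 5 + 60, 100 + 200)
print(f"RR (crude): {crude:.2f}")          # 0.62 -- a spurious "protective" effect
```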
8. Chance
One of the alternative explanations to the observed
association between an exposure and a disease is chance
Evaluation of the role of chance is mainly the domain of
statistics and it involves:
Hypothesis Testing (Test of Statistical Significance) and
Estimation of Confidence Interval
Hypothesis Testing (Test of Statistical Significance)
Test of statistical significance quantifies the degree to which
sampling variability may account for the observed results.
The "P value" is used to indicate the probability or
likelihood of obtaining a result at least as extreme as that
observed in a study by chance alone, assuming that there
is truly no association between exposure and
outcome under consideration (i.e., H0 is true).
Hypothesis Testing…
For medical research, a P value < 0.05 is conventionally
taken to indicate statistical significance.
A very small P-value means that you would be very unlikely to
observe such an association if the null hypothesis is true.
Steps in testing for statistical significance
1. Make an explicit statement of the hypothesis
Null Hypothesis
H0:P1 = P2
Alternate Hypothesis
H1:P1 ≠ P2
2. Compute a measure of association - the relative risk
or odds ratio.
3. Calculate the chi-square statistical test of
significance.
Steps in testing for statistical significance…
4. For the calculated chi-square value, look up the
corresponding P-value in a chi-square table.
If the P-value ≤ 0.05, chance is an unlikely explanation:
Reject the null hypothesis
There is a statistically significant difference
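The steps above can be sketched on a hypothetical 2x2 table (exposure by disease); the chi-square formula used is the standard shortcut for a 2x2 table without continuity correction:

```python
# Hypothetical 2x2 table:      diseased   healthy
#   exposed                        30        70
#   unexposed                      15        85
a, b = 30, 70   # exposed: cases, non-cases
c, d = 15, 85   # unexposed: cases, non-cases
n = a + b + c + d

# Step 2: measure of association -- here the risk ratio.
rr = (a / (a + b)) / (c / (c + d))

# Step 3: chi-square for a 2x2 table:
# chi2 = n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Step 4: compare with the critical value 3.84 (chi-square, 1 df, alpha = 0.05).
print(f"risk ratio = {rr:.1f}")    # 2.0
print(f"chi-square = {chi2:.2f}")  # 6.45
print("reject H0" if chi2 > 3.84 else "fail to reject H0")  # reject H0
```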
Estimation of Confidence Interval
The confidence interval represents the range within
which the true magnitude of effect lies, with a
certain degree of assurance.
It is more informative than the P value alone because it
reflects both the size of the sample and the
magnitude of the effect.
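As a minimal sketch, a 95% confidence interval for a prevalence estimate can be computed with the Wald formula (the counts below are hypothetical):

```python
import math

# Sketch: 95% Wald confidence interval for an estimated proportion:
# p-hat +/- 1.96 * sqrt(p-hat * (1 - p-hat) / n)
cases, n = 30, 200
p_hat = cases / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"prevalence = {p_hat:.3f}, 95% CI = ({lower:.3f}, {upper:.3f})")
# A larger sample narrows the interval; a wide interval signals low precision.
```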
Establishing Causal Association (causation)
In the absence of experimental evidence, the following
criteria (called the Bradford-Hill criteria) are used to assess
the strength of evidence for a cause-and-effect relationship.
1. Strength of the Association - The stronger the
association, the more likely that it is causal.
2. Consistency of the Relationship - The same
association should be demonstrable in studies with different
methods, conducted by different investigators, and in
different populations.
3. Specificity of the Association - The association is
more likely causal if a single exposure is linked to a single
disease.
Establishing Causal Association…
4. Temporal Relationship - The exposure to the factor
must precede the onset of the disease
5. Dose-response Relationship - The risk of disease often
increases with increasing exposure to a causal agent.
6. Experimental Confirmation - Demonstrating that the risk of
disease changes when exposure to the putative causal agent is
deliberately manipulated provides strong evidence for causation.
7. Biological Plausibility - The hypothesis for causation
should be coherent with what is known about the biology and
the descriptive epidemiology of the disease.
Summary
What is validity?
What alternative explanations do you know for the observed
findings?
How do you understand precision?
How do you assess the strength of evidence for a cause-and-
effect relationship?
Thank you!!!