
Meta-Analysis

The first definition of meta-analysis was given by Gene Glass [1976] as “the statistical
analysis of a large collection of results from individual studies for the purpose of integrating
the findings”.
• Glass also called meta-analysis “an analysis of analyses”. The Greek word “meta” refers to
“after” or “beyond”, and a meta-analysis therefore goes beyond the individual studies.
Huque [1988] defined the term as “A statistical analysis that combines or integrates the
results of several independent clinical trials considered by the analyst to be combinable”.
• Historically, it was social scientists and statisticians in America who began to actively
develop methods that would deal with large volumes of data and quantitatively
synthesize them.
An important step in a systematic review is the thoughtful consideration of whether it is
appropriate to combine the numerical results of all, or perhaps some, of the studies. Such a
meta-analysis yields an overall statistic (together with its confidence interval) that
summarizes the effectiveness of an experimental intervention compared with a comparator
intervention. Potential advantages of meta-analyses include the following:
1. To improve precision. Many studies are too small to provide convincing evidence about
intervention effects in isolation. Estimation is usually improved when it is based on
more information.
2. To answer questions not posed by the individual studies. Primary studies often involve a
specific type of participant and explicitly defined interventions. A selection of studies in
which these characteristics differ can allow investigation of the consistency of effect
across a wider range of populations and interventions. It may also, if relevant, allow
reasons for differences in effect estimates to be investigated.
3. To settle controversies arising from apparently conflicting studies or to generate new
hypotheses. Statistical synthesis of findings allows the degree of conflict to be formally
assessed, and reasons for different results to be explored and quantified.

Principles of meta-analysis
The commonly used methods for meta-analysis are based on the following basic principles:
1. Meta-analysis is typically a two-stage process. In the first stage, a summary statistic is
calculated for each study, to describe the observed intervention effect in the same way
for every study. For example, the summary statistic may be a risk ratio if the data are
dichotomous, or a difference between means if the data are continuous.
2. In the second stage, a summary (combined) intervention effect estimate is calculated
as a weighted average of the intervention effects estimated in the individual studies. A
weighted average is defined as

weighted average = Σ(Yi Wi) / Σ Wi

where Yi is the intervention effect estimated in the ith study, Wi is the weight given to
the ith study, and the summation is across all studies. Note that if all the weights are
the same then the weighted average is equal to the mean intervention effect. The
bigger the weight given to the ith study, the more it will contribute to the weighted
average (a small worked sketch of this two-stage calculation is given after this list).
3. The combination of intervention effect estimates across studies may optionally
incorporate an assumption that the studies are not all estimating the same intervention
effect, but estimate intervention effects that follow a distribution across studies. This is
the basis of a random-effects meta-analysis. Alternatively, if it is assumed that each
study is estimating exactly the same quantity, then a fixed-effect meta-analysis is
performed.
4. The standard error of the summary intervention effect can be used to derive a
confidence interval, which communicates the precision (or uncertainty) of the summary
estimate; and to derive a P value, which communicates the strength of the evidence
against the null hypothesis of no intervention effect.
5. As well as yielding a summary quantification of the intervention effect, all methods of
meta-analysis can incorporate an assessment of whether the variation among the
results of the separate studies is compatible with random variation, or whether it is
large enough to indicate inconsistency of intervention effects across studies.
6. The problem of missing data is one of the numerous practical considerations that must
be thought through when undertaking a meta-analysis. In particular, review authors
should consider the implications of missing outcome data from individual participants
(due to losses to follow-up or exclusions from analysis).
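As an illustration of principles 1 and 2 above, the following short Python sketch (not part of the original text; the 2×2 study counts are hypothetical) computes a summary statistic for each study in a first stage and then combines them as a weighted average in a second stage.

import math

# Hypothetical 2x2 data per study: (events_treated, n_treated, events_control, n_control)
studies = [
    (12, 100, 20, 100),
    (30, 250, 45, 240),
    (8, 60, 9, 55),
]

# Stage 1: calculate the same summary statistic for every study,
# here the log risk ratio and its standard error
effects = []
for et, nt, ec, nc in studies:
    log_rr = math.log((et / nt) / (ec / nc))
    se = math.sqrt(1 / et - 1 / nt + 1 / ec - 1 / nc)
    effects.append((log_rr, se))

# Stage 2: combine the per-study effects as a weighted average;
# with equal weights this would reduce to the simple mean
weights = [1 / se ** 2 for _, se in effects]
pooled = sum(w * y for (y, _), w in zip(effects, weights)) / sum(weights)
print("Pooled log risk ratio:", round(pooled, 3), "=> risk ratio:", round(math.exp(pooled), 3))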
A generic inverse-variance approach to meta-analysis
A common and simple version of the meta-analysis procedure is the inverse-variance
method. This approach is implemented in its most basic
form in RevMan, and is used behind the scenes in many meta-analyses of both dichotomous
and continuous data.
The inverse-variance method is so named because the weight given to each study is
chosen to be the inverse of the variance of the effect estimate (i.e. 1 over the square of its
standard error). Thus, larger studies, which have smaller standard errors, are given more
weight than smaller studies, which have larger standard errors. This choice of weights
minimizes the imprecision (uncertainty) of the pooled effect estimate.
A fixed-effect meta-analysis using the inverse-variance method calculates a weighted average as:

weighted average = Σ(Yi / SEi²) / Σ(1 / SEi²)

where Yi is the intervention effect estimated in the ith study, SEi is the standard error of
that estimate, and the summation is across all studies. The basic data required for the analysis
are therefore an estimate of the intervention effect and its standard error from each study. A
fixed-effect meta-analysis is valid under an assumption that all effect estimates are estimating
the same underlying intervention effect, which is referred to variously as a ‘fixed-effect’
assumption, a ‘common-effect’ assumption or an ‘equal-effects’ assumption.
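As a concrete, hypothetical illustration of these inverse-variance weights (a sketch only, with made-up log odds ratios and standard errors, not an excerpt from RevMan), the fixed-effect pooled estimate and its 95% confidence interval can be computed as follows:

import math

# Hypothetical per-study estimates and standard errors (e.g. log odds ratios)
estimates = [(-0.35, 0.20), (-0.10, 0.15), (-0.42, 0.30)]

weights = [1 / se ** 2 for _, se in estimates]          # inverse-variance weights
pooled = sum(w * y for (y, _), w in zip(estimates, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))                 # standard error of the pooled effect

ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"Pooled log odds ratio {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
print(f"Pooled odds ratio {math.exp(pooled):.3f} "
      f"({math.exp(ci_low):.3f} to {math.exp(ci_high):.3f})")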

Random-effects methods for meta-analysis


A variation on the inverse-variance method is to incorporate an assumption that the
different studies are estimating different, yet related, intervention effects. This produces a
random-effects meta-analysis, and the simplest version is known as the DerSimonian and
Laird method.
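The following is a minimal sketch of the DerSimonian and Laird calculation, assuming hypothetical log odds ratios and standard errors: it estimates the between-study variance (tau²) by the method of moments and then applies random-effects weights.

import math

# Hypothetical per-study estimates and standard errors
estimates = [(-0.35, 0.20), (-0.10, 0.15), (-0.42, 0.30), (0.05, 0.25)]

w = [1 / se ** 2 for _, se in estimates]
y = [e for e, _ in estimates]
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Cochran's Q and the DerSimonian-Laird moment estimate of tau^2 (between-study variance)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
df = len(y) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each study's variance, making the weights more balanced
w_re = [1 / (se ** 2 + tau2) for _, se in estimates]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
print(f"tau^2 = {tau2:.3f}, random-effects pooled estimate = {pooled_re:.3f}")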

Performing inverse-variance meta-analyses


Most meta-analysis programs perform inverse-variance meta-analyses. Usually the user
provides summary data from each intervention arm of each study, such as a 2×2 table when
the outcome is dichotomous, or means, standard deviations and sample sizes for each group
when the outcome is continuous. This avoids the need for the author to calculate effect
estimates, and allows the use of methods targeted specifically at different types of data.
When the data are conveniently available as summary statistics from each intervention group,
the inverse-variance method can be implemented directly. For example, estimates and their
standard errors may be entered directly into RevMan under the ‘Generic inverse variance’
outcome type. For ratio measures of intervention effect, the data must be entered into RevMan
as natural logarithms (for example, as a log odds ratio and the standard error of the log odds
ratio). However, it is straightforward to instruct the software to display results on the original
(e.g. odds ratio) scale. It is possible to supplement or replace this with a column providing the
sample sizes in the two groups. Note that the ability to enter estimates and standard errors
creates a high degree of flexibility in meta-analysis. It facilitates the analysis of properly
analysed crossover trials, cluster-randomized trials and non-randomized trials, as well as
outcome data that are ordinal, time-to-event or rates.
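As an illustration of the log-scale entry described above (a sketch only; RevMan itself handles the back-transformation to the odds ratio scale), a reported odds ratio and its 95% confidence interval can be converted to a log odds ratio and standard error before entry:

import math

def log_or_and_se(odds_ratio, ci_low, ci_high, z=1.96):
    # The width of a 95% confidence interval on the log scale spans 2 x 1.96 standard errors
    log_or = math.log(odds_ratio)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z)
    return log_or, se

# Hypothetical reported result: OR 0.75 (95% CI 0.58 to 0.97)
print(log_or_and_se(0.75, 0.58, 0.97))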
Why a Meta-Analysis is Needed
There are several reasons why the results of studies asking similar research questions are
commonly found to be at variance with each other. This diversity that inherently exists
amongst studies is called heterogeneity.
These include-
1. Use of different case definitions for the disease under investigation [for instance,
bleeding due to warfarin in one study may include mild bleeds only while another study
may include hospitalizations and deaths due to bleeding which are severe events]
2. The study population may come from different parts of the same country or even from
different countries [this would be important in infectious diseases like malaria where
resistance patterns vary from country to country and within the same country]
3. The inclusion and exclusion criteria may vary and methodology to arrive at conclusions
may be different [for example, peripheral smear diagnosis of malaria in one study versus
PCR-based diagnosis in another].
One way of combining all studies on a particular topic is the traditional narrative review.
This review typically combines several studies in a chronological discourse by an expert in
that field. While the research question itself for the review may be well thought through,
these reviews tend to be largely subjective and prone to bias, as they depend upon the
expert evaluating the studies, the quality of the search and the number of studies identified
therein. They are also easier to carry out when the number of studies is not too large.
Additional disadvantages include different researchers coming to different conclusions and a
lack of critical and in-depth analysis of each study included in the review. Meta-analyses, on
the other hand, offer the advantage of applying objective statistical criteria, including
addressing the variability between studies (heterogeneity), and can readily be done with
ready-to-use software [combined with training] regardless of the number of studies that
need to be synthesized.
The Distinction between a Systematic Review and a Meta-Analysis
A term that is often used alongside a meta-analysis is “systematic review”. The two
terms are also erroneously used as synonyms. A systematic review is a type of review which
answers a focused research question, and within which “meta-analyses” may or may not be a
part. Systematic reviews typically have a specifically formulated research question, a clear
search strategy, a pre-decided protocol that includes methods to identify which studies are to
be included [or excluded based on selection criteria], quality assessment of studies and
methods to analyze the ones included.
When systematic reviews contain a statistical synthesis of the included studies to
generate a single number [also called effect size], this becomes a meta-analysis. Thus,
systematic reviews can be standalone [without a meta-analysis] or include a meta-analytic
component.
In summary, a Meta-analysis refers to that portion of the systematic review that
involves the statistical analysis.

Steps in a Meta Analysis


A total of seven steps need to be followed while conducting a systematic review and/or
meta-analysis.
These include-
1. Formulating a research question
2. Writing the protocol and registering it in public domain
3. Identification of the studies using a clear and comprehensive search strategy
4. Selecting the right studies to be included [based on the protocol]
5. Data abstraction
6. Quality Assessment of included studies
7. Statistical analysis

Step 1- Formulating the Research Question


Perhaps the most important step in clinical research in general, and meta-analysis in
particular, is to formulate the research question well. This is the uncertainty or lacuna that the
researcher is attempting to answer. Asking the right question will lead to the right study
design, an appropriate literature search strategy and statistical analysis that will generate the
right research evidence that is needed to drive practice decisions.
Thus, it ensures that the question will be answered in all likelihood. There are several
choices available for formulating a research question, and these are given below as
acronyms or mnemonics.
I. PICOT
A widely accepted and used acronym or mnemonic for formulating a research question
is PICO or PICO[T]. It stands for P- Patient or Problem or Population, I-Intervention, C-Control,
O-Outcome and T-Time.

It essentially involves breaking down the research question into five components, which
ensures that the researcher and the reader are able to identify its individual elements.

II. ECLIPSE

This stands for Expectation [what does the search requester want the information for?],
Client Group [for whom is the service intended?], Location [where is the service physically
situated?], Impact [what constitutes success and how is this measured?], Professionals [who
provide or improve this service?] and Service [its nature: outpatient, inpatient, day care only
and so on]. The mnemonic helps formulate research questions in the area of health policy
management.
For example, the Director of a major hospital may be interested in reducing the waiting
time for out-patients who visit his hospital.

The ECLIPSE for this study would be as follows


E- Reduce patient waiting time
C-All out-patients visiting the hospital
L-Hospital located at south end of the city
I- Reduction [by at least 15 minutes] in waiting time, measured in minutes
P-All doctors or department/s who evaluate these out-patients
S- Outpatients who attend the hospital

III. SPIDER - Sample, Phenomenon, Design, Evaluation and Research type [largely for
qualitative research and/or mixed research methods]

Step 2- Writing and registering the study protocol


The protocol for a systematic review and/or meta-analysis should clearly state the
rationale, objectives, search strategy, methods, end points and quality checks that would be
used.
The PRISMA [Preferred Reporting Items for Systematic Reviews and Meta-Analyses]
guidelines recommend registration of the protocol a priori. Registration ensures that the
protocol [and the methodology within] is accessible to all [much like registration of clinical
trials before they are initiated]. For systematic reviews and/or meta-analysis done with and for
the Cochrane group, both the protocol and the systematic review [with or without a meta
analytic component] are available from the Cochrane Database of Systematic Reviews [CDSR].
In 2007, through the Indian Council of Medical Research [ICMR], India became the first low
income country to purchase national access to the Cochrane Library for Indians with internet
access, through an agreement with the publishing partner of The Cochrane Collaboration,
John Wiley and Sons Limited.
Non-Cochrane systematic reviews and meta-analyses can be registered with PROSPERO,
an international database that has been set up by the University of York and is free.

Step 3- Identification of the studies using a clear and comprehensive search strategy
The search strategy should be all-encompassing and ensure that all relevant articles are
retrieved. Serious bias may be introduced and erroneous conclusions drawn if the search
strategy is poor.
As many databases as possible should be included with the search being tailored for each
individual database. Sensitivity of a strategy refers to identifying as many potentially
relevant articles as possible, while specificity refers to retrieving only those articles that are
definitely relevant. All
search strategies should aim at maximizing sensitivity so as not to miss articles that are likely
to be relevant.
Commonly searched databases include the National Library of Medicine's Medline, the
Excerpta Medica Database [EMBASE], Biosciences Information Service [BIOSIS], the
Cumulative Index to Nursing and Allied Health Literature [CINAHL], Health Services
Technology, Administration and Research [HEALTHSTAR], and the Cochrane Central
Register of Controlled Trials.

Step 4- Selecting the right studies to be included: narrowing the results of a search
strategy to a final number
The next step is to read the title and abstract of each reference obtained and eliminate
those that are not relevant. Subsequently, the full texts of potentially relevant articles
[those likely to pass the selection criteria] are obtained. The focus while reading the full text should remain
on the methods and results section rather than the introduction.

Step 5- Data Extraction


Once the final list is ready, we extract the relevant information from each article, depending
upon the protocol: case/disease definitions used, key variables, study design, outcome
measures, nature of participants, therapeutic area, year of publication, results, setting and
so on.
Step 6- Quality assessment of included studies
Once the set of studies to be included is finalized, it is important to assess their
quality. This is because a flawed study is in fact worse than no study at all. Several methods
are available to assess quality of studies, each with its own merits and demerits.
Step 7 - Statistical analysis of included studies
Understanding effect size - One term that is frequently used in meta-analysis [and
subsequently used in this paper] is “effect size”, which represents the basic unit of a
meta-analysis. The effect size can be expressed as any one of several metrics: odds ratio,
risk ratio, standardized mean difference, person-time data and so on.
Statistical synthesis of data - Once the data from all the shortlisted studies are ready, they
are fed into RevMan. The two commonly used methods for analysis are Mantel-Haenszel
[fixed effects model] and DerSimonian-Laird [random effects model]. Both methods
essentially provide a single number or summary statistic along with 95% confidence
intervals, which is the goal of any meta-analysis.
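As a sketch of the Mantel-Haenszel pooled odds ratio (with hypothetical 2×2 counts; the accompanying confidence interval requires an additional variance formula that is not shown here):

# Sketch of the Mantel-Haenszel fixed-effect pooled odds ratio (hypothetical counts).
# Each study: (a, b, c, d) = events / non-events in the treatment group,
# followed by events / non-events in the control group.
studies = [
    (12, 88, 20, 80),
    (30, 220, 45, 195),
    (8, 52, 9, 46),
]

numerator = sum(a * d / (a + b + c + d) for a, b, c, d in studies)
denominator = sum(b * c / (a + b + c + d) for a, b, c, d in studies)
print("Mantel-Haenszel pooled odds ratio:", round(numerator / denominator, 3))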
Allocating weights to the different studies - As the ultimate goal of any meta-analysis is
to estimate one overall effect after pooling all the studies, one way of doing it is simply to add
all effect sizes and compute their mean. However, each study in a meta-analysis is actually
different from the others, which is why the studies are instead given different weights.
Fixed and random effects models - In the fixed effects model, we assume that the effect size
in all included studies is identical and that any difference between them is a result of differing
sample sizes and the associated variability; hence the term “fixed effects”. In the random
effects model, on the other hand, we assume that each study is unique and therefore has its
own effect size. Here, unlike in the fixed-effects model, studies with smaller sample sizes are
not discounted as heavily, as each study is believed to make its own important contribution
to the overall analysis. The random effects model is based on the assumption that, if a large
number of studies were conducted for the same research question using the preset selection
criteria, the true effect sizes for all these studies would be distributed about a mean. The
studies included in the meta-analysis are believed to represent a “random” sample from this
larger number; hence the term “random effects”. Thus, the weights allocated in the random
effects model are more balanced [relative to the fixed effects model].
Testing for Heterogeneity - An important issue in meta-analysis, apart from looking at the
significance of treatment effects, is to look at the extent to which the included studies are
similar [or dissimilar] to each other.
Two statistics are used to assess this: Cochran's Q [or the Q-test] and I² [I-squared].
The former is a less used metric as it has poor power [ability to detect a difference]
when the number of studies is small. The I² statistic describes heterogeneity as a percentage.
For example, if the I² value is 50%, it means that 50% of the variation across studies is a
result of heterogeneity and not chance. It is not dependent upon the number of studies, and its
ease of use makes comprehension easier for clinicians. When testing for heterogeneity the null
hypothesis would state that there is no difference in effect size between the included studies.
The alternative hypothesis is that there is a difference in effect size across the studies.
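As a small worked sketch (using the same kind of hypothetical per-study estimates as above, not data from any real study), Cochran's Q and I² can be computed directly from the study effects and their standard errors:

# Hypothetical per-study effects and standard errors
estimates = [(-0.35, 0.20), (-0.10, 0.15), (-0.42, 0.30), (0.05, 0.25)]

w = [1 / se ** 2 for _, se in estimates]
y = [e for e, _ in estimates]
pooled = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, y))       # Cochran's Q
df = len(y) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0            # I^2 as a percentage
print(f"Q = {q:.2f} on {df} degrees of freedom, I^2 = {i2:.1f}%")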

Simplified Seven steps when performing a meta-analysis.
