
Meta-analysis

A subset of systematic reviews; a method for systematically combining pertinent qualitative and
quantitative study data from several selected studies to develop a single conclusion that has
greater statistical power. This conclusion is statistically stronger than the analysis of any single
study, due to increased numbers of subjects, greater diversity among subjects, or accumulated
effects and results.

Meta-analysis would be used for the following purposes:

• To establish statistical significance with studies that have conflicting results
• To develop a more correct estimate of effect magnitude
• To provide a more complex analysis of harms, safety data, and benefits
• To examine subgroups with individual numbers that are not statistically significant

If the individual studies are randomized controlled trials (RCTs), a meta-analysis combining several selected RCT results sits at the highest level of the evidence hierarchy, followed by systematic reviews, which analyze all available studies on a topic.

Advantages

• Greater statistical power
• Confirmatory data analysis
• Greater ability to extrapolate to the general population affected
• Considered an evidence-based resource

Steps in a Meta-Analysis

A total of seven steps should be followed when conducting a systematic review and/or meta-analysis. These are:

1. Formulating a research question
2. Writing the protocol and registering it in the public domain
3. Identification of the studies using a clear and comprehensive search strategy
4. Selecting the right studies to be included [based on the protocol]
5. Data extraction
6. Quality assessment of included studies
7. Statistical analysis [including generating the Forest plot]

Step 1- Formulating the research question

Perhaps the most important step of clinical research in general, and of meta-analysis in particular, is to formulate the research question well. This is the uncertainty or gap that the researcher is attempting to address. Asking the right question leads to the right study design, an appropriate literature search strategy and a statistical analysis that will generate the research evidence needed to drive practice decisions. Thus, it ensures that the question will, in all likelihood, be answered. Several frameworks, usually expressed as acronyms or mnemonics, are available for formulating a research question.

Step 2- Writing and registering the study protocol

The protocol for a systematic review and/or meta-analysis should clearly state the rationale, objectives, search strategy, methods, end points and quality checks that will be used. The PRISMA [Preferred Reporting Items for Systematic Reviews and Meta-Analyses] guidelines recommend registration of the protocol a priori. Registration ensures that the protocol [and the methodology within it] is accessible to all [much like the registration of clinical trials before they are initiated] and also prevents duplication by another author.

Step 3- Identification of the studies using a clear and comprehensive search strategy

The search strategy should be all-encompassing and ensure that all relevant articles are retrieved. A poor search strategy can introduce serious bias and lead to erroneous conclusions. As many databases as possible should be included, with the search tailored to each individual database. Sensitivity of a strategy refers to identifying as many potentially relevant articles as possible; specificity refers to retrieving only those articles that are definitely relevant. All search strategies should aim at maximizing sensitivity so as not to miss articles that are likely to be relevant.

Commonly searched databases include the National Library of Medicine's Medline, the Excerpta Medica Database [EMBASE], Biosciences Information Service [BIOSIS], the Cumulative Index to Nursing and Allied Health Literature [CINAHL], Health Services Technology, Administration and Research [HEALTHSTAR], and the Cochrane Central Register of Controlled Trials. Boolean operators [AND, OR, NOT] should be used along with the search terms to narrow or broaden the search. All databases have filters [for example, type of article, language of publication, dates of publication, age of participants and so on], and these should be used to narrow the results down to those articles likely to be relevant to the research question. In addition, the search should also include evaluating the cross-references of the articles retrieved. Use of controlled vocabulary alone [subject headings only] may result in a suboptimal yield. Therefore, uncontrolled vocabulary [for example, variations such as abbreviations, generic names, terms used internationally, and spellings that differ between countries] should also be used in the search.
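By way of illustration, the sketch below shows how such a Boolean search might be run against Medline/PubMed from Python using Biopython's Entrez module. The choice of module, the clinical topic, the search terms and the e-mail address are all assumptions made for the example; the same Boolean logic applies when searching any database through its own interface.

    # A minimal sketch of a Boolean PubMed [Medline] search using Biopython's
    # Entrez module. The query below is illustrative only.
    from Bio import Entrez

    Entrez.email = "researcher@example.org"  # NCBI asks for a contact address

    # Boolean operators [AND, OR, NOT] combine controlled vocabulary [MeSH]
    # with uncontrolled free-text variants; a filter narrows by article type.
    query = (
        '("myocardial infarction"[MeSH Terms] OR "heart attack"[Title/Abstract]) '
        'AND (aspirin[MeSH Terms] OR "acetylsalicylic acid"[Title/Abstract]) '
        'AND randomized controlled trial[Publication Type]'
    )

    handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
    record = Entrez.read(handle)
    handle.close()
    print(record["Count"], "records found;", len(record["IdList"]), "PMIDs retrieved")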

Given that negative results are often not published, the search strategy should also include [to the extent possible] unpublished data, thesis/project reports that may be available on institutional or university websites, conference proceedings and abstracts, and telephone/email contact with trialists and experts in the field. Developing a search strategy is an iterative process- that is, a process of continual assessment and refinement.

Citation managers- Once the results of the search are available, it is useful to export them into a citation manager. Because they are electronic, citation managers preclude manual errors, eliminate duplicates, save time and also back up the search results. Zotero and Mendeley are two citation managers that are available free of charge; EndNote and RefWorks are paid software [18]. Citation managers also incorporate an array of reference styles, so in the event that the paper is rejected by one journal, it is easy to change the formatting and referencing style for another journal.

Step 4- Selecting the right studies to be included - narrowing the results of a search strategy to a
final number

The next step is to read the title and abstract of each reference obtained and eliminate those that are not relevant. Subsequently, we obtain the full texts of the potentially relevant articles [those likely to pass the selection criteria]. The focus while reading the full text should remain on the methods and results sections rather than the introduction.

Step 5- Data Extraction

Once the final list is ready, we extract the relevant information from each article, depending upon the protocol- the case/disease definitions used, key variables, study design, outcome measures, nature of the participants, therapeutic area, year of publication, results, setting and so on. These data will then need to be fed into the software for analysis.
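As a rough illustration only, the extracted information could be organized as one record per study, for example as a small table written to a CSV file. The field names and numbers below are hypothetical and would in practice be dictated by the protocol and the chosen outcome measure.

    # A sketch of one way to structure extracted data before analysis.
    # Field names and counts are hypothetical examples.
    import csv

    studies = [
        {"study": "Trial A 2015", "design": "RCT", "events_exp": 12, "n_exp": 150,
         "events_ctrl": 24, "n_ctrl": 148},
        {"study": "Trial B 2018", "design": "RCT", "events_exp": 8, "n_exp": 90,
         "events_ctrl": 15, "n_ctrl": 92},
    ]

    with open("extracted_data.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=studies[0].keys())
        writer.writeheader()
        writer.writerows(studies)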

Step 6- Quality assessment of included studies

Once the list of studies to be included is finalized, it is important to assess their quality, because a flawed study is in fact worse than no study at all. Several methods are available to assess the quality of studies, each with its own merits and demerits. These include, among others, the Jadad score, the CONSORT statement, and the Cochrane Back Review Group criteria. We describe the Jadad score here as an illustrative example. It is a 5-point score in which one point each is allocated for randomization, description of the method used for generating the random sequence, blinding, description of the method used for blinding/masking, and clear-cut information on dropouts and withdrawals. One point each is deducted if randomization is described but the method used is inappropriate, and if blinding is described but the method of blinding is inappropriate [flawed]. Its strength lies in its brevity and thereby ease of use.
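Because the Jadad score reduces to a handful of yes/no judgements, it can be computed mechanically. The short Python sketch below simply encodes the rules stated in the paragraph above; the function name and arguments are our own illustrative choices, not part of any published tool.

    # A sketch of the Jadad score as described above: one point each for five
    # items, minus one point for each of two kinds of inappropriate methodology.
    def jadad_score(randomized, random_method_described, random_method_appropriate,
                    blinded, blinding_method_described, blinding_method_appropriate,
                    dropouts_described):
        score = 0
        score += 1 if randomized else 0
        score += 1 if random_method_described else 0
        score += 1 if blinded else 0
        score += 1 if blinding_method_described else 0
        score += 1 if dropouts_described else 0
        # Deduct a point when a described method is in fact inappropriate [flawed]
        if random_method_described and not random_method_appropriate:
            score -= 1
        if blinding_method_described and not blinding_method_appropriate:
            score -= 1
        return score  # ranges from 0 (worst) to 5 (best)

    # Example: a randomized, blinded trial with appropriate, well-described
    # methods and clear reporting of withdrawals scores the maximum of 5.
    print(jadad_score(True, True, True, True, True, True, True))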

Step 7 - Statistical analysis of included studies

Understanding what Effect Size is

One term that is frequently used in meta-analysis [and subsequently used in this paper] is "effect size", which represents the basic unit of a meta-analysis. We have seen this earlier in the article on sample size calculation. When we compare two interventions [say A and B], we are seeking to find the difference between them. Meta-analysis is also about A vs. B comparisons. Simply put, the effect size is the difference between A and B and the "size" or "magnitude" of this difference. It is a standardized metric that expresses the difference between two groups- usually an experimental and a control group. The effect size can be expressed as any one of several metrics- odds ratio, risk ratio, standardized mean difference, person-time data and so on.
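To make the idea concrete, the sketch below computes two of these metrics- the risk ratio and the odds ratio- from a single study's 2x2 table, together with a 95% confidence interval built on the log scale. The counts are invented purely for illustration.

    # A sketch of two common effect-size metrics from one study's 2x2 table.
    import math

    events_exp, n_exp = 12, 150      # experimental group [intervention A]
    events_ctrl, n_ctrl = 24, 148    # control group [intervention B]

    risk_ratio = (events_exp / n_exp) / (events_ctrl / n_ctrl)
    odds_ratio = (events_exp / (n_exp - events_exp)) / (events_ctrl / (n_ctrl - events_ctrl))

    # Analyses are usually carried out on the log scale, with a standard error
    # used to build the 95% confidence interval.
    log_rr = math.log(risk_ratio)
    se_log_rr = math.sqrt(1/events_exp - 1/n_exp + 1/events_ctrl - 1/n_ctrl)
    ci_low = math.exp(log_rr - 1.96 * se_log_rr)
    ci_high = math.exp(log_rr + 1.96 * se_log_rr)
    print(f"RR={risk_ratio:.2f} [95% CI {ci_low:.2f}-{ci_high:.2f}], OR={odds_ratio:.2f}")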
Statistical synthesis of data- Once the data from all the shortlisted studies are ready, they are fed into RevMan (see later). The two commonly used methods of analysis are Mantel-Haenszel [fixed effects model, see below] and DerSimonian-Laird [random effects model]. Both methods essentially provide a single number or summary statistic along with its 95% confidence interval, which is the goal of any meta-analysis.
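For illustration, a minimal sketch of Mantel-Haenszel fixed effects pooling of risk ratios is given below, using three invented studies. In practice this calculation is performed by the software [for example RevMan] rather than by hand.

    # A sketch of the Mantel-Haenszel fixed-effect pooled risk ratio.
    studies = [
        # (events_exp, n_exp, events_ctrl, n_ctrl) - invented numbers
        (12, 150, 24, 148),
        (8,   90, 15,  92),
        (30, 300, 45, 310),
    ]

    # RR_MH = sum(a * n_ctrl / N) / sum(c * n_exp / N), with N the study total
    numerator = sum(a * nc / (ne + nc) for a, ne, c, nc in studies)
    denominator = sum(c * ne / (ne + nc) for a, ne, c, nc in studies)
    rr_mh = numerator / denominator
    print(f"Mantel-Haenszel pooled risk ratio: {rr_mh:.2f}")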

Allocating weights to the different studies- As the ultimate goal of any meta-analysis is to estimate one overall effect after pooling all the studies, one way of doing this is to simply add all the effect sizes and compute their mean. However, each study in a meta-analysis is actually different from the others. Hence, we allocate a "weight" to each study- in other words, we give more weight to some studies and less to others and compute a "weighted mean". How do we decide how much weight each study should get? This is driven by two key factors- the sample size of the study [the bigger the better] and the number of outcomes in each study [the more the better].
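A minimal sketch of such a weighted mean is shown below, using inverse-variance weights [one common way of operationalizing "bigger studies with more events count for more"]. The effect sizes and variances are illustrative values on the log scale.

    # A sketch of a weighted mean of effect sizes using inverse-variance weights:
    # more precise studies [smaller variance] receive a larger weight.
    import math

    # (log effect size, variance of the log effect size) for each study - invented
    log_effects = [(-0.90, 0.09), (-0.20, 0.15), (-0.45, 0.05)]

    weights = [1.0 / var for _, var in log_effects]
    pooled_log = sum(w * es for (es, _), w in zip(log_effects, weights)) / sum(weights)
    print(f"Weighted mean effect size [ratio scale]: {math.exp(pooled_log):.2f}")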

Fixed and random effects models- In the fixed effects model, we assume that the effect size in all included studies is identical and that any difference between them is a result of differing sample sizes and the associated variability; hence the term "fixed effects". Thus, when we allocate weights to the studies [see above], studies with smaller sample sizes get a lower weight and larger studies a higher weight. In the random effects model, on the other hand, we assume that each study is unique and therefore has its own effect size. Here, unlike in the fixed effects model, studies with smaller sample sizes are not discounted by giving them lower weights, as each study is special and is believed to make an equally important contribution to the overall analysis. The random effects model is based on the assumption that if a large number of studies were conducted for the same research question using the pre-set selection criteria, the true effect sizes of all these studies would be distributed about a mean. The studies included in the meta-analysis are believed to represent a "random" sample from this larger number; hence the term "random effects". Thus, the weights allocated in the random effects model are more balanced [relative to the fixed effects model].
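The sketch below illustrates, with the same invented numbers as above, how the DerSimonian-Laird method estimates the between-study variance [tau squared] and uses it to produce the more balanced random effects weights and the pooled estimate.

    # A sketch of the DerSimonian-Laird random-effects calculation: tau^2 is
    # estimated from Cochran's Q and added to each study's own variance, which
    # makes the weights more balanced than in the fixed effects model.
    import math

    log_effects = [(-0.90, 0.09), (-0.20, 0.15), (-0.45, 0.05)]  # invented values
    w_fixed = [1.0 / var for _, var in log_effects]
    fixed_mean = sum(w * es for (es, _), w in zip(log_effects, w_fixed)) / sum(w_fixed)

    # Cochran's Q and the DerSimonian-Laird estimate of tau^2
    q = sum(w * (es - fixed_mean) ** 2 for (es, _), w in zip(log_effects, w_fixed))
    k = len(log_effects)
    c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - (k - 1)) / c)

    w_random = [1.0 / (var + tau2) for _, var in log_effects]
    random_mean = sum(w * es for (es, _), w in zip(log_effects, w_random)) / sum(w_random)
    se = math.sqrt(1.0 / sum(w_random))
    ci_low, ci_high = math.exp(random_mean - 1.96 * se), math.exp(random_mean + 1.96 * se)
    print(f"Random-effects pooled ratio: {math.exp(random_mean):.2f} "
          f"[95% CI {ci_low:.2f}-{ci_high:.2f}], tau^2 = {tau2:.3f}")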

The key elements of a Forest plot [referring to an example plot in which the risk ratio is the effect size] are:

• The squares and the horizontal lines that cut through them- These pertain to the summary statistics of the individual studies [the risk ratio in this example]; the horizontal lines indicate the 95% CI of each risk ratio.
• The "diamond"- This is located at the bottom, below all the studies. It may fall on either side of the central line, or fall in the middle and "cut" it. It represents the summation of all the studies, and the horizontal edges of the diamond indicate the 95% CI of this summation. If the diamond falls on the line, it indicates no difference between the two groups. If it falls on the left, it favors the experimental intervention, and if it falls on the right, it favors the control group.
• The lower left corner of the Forest plot- This gives the I² statistic, the measure of heterogeneity, along with its p value [in this case p=0.24, indicating a lack of significant variance between the studies]. This is followed by a second p value for the effect size of this meta-analysis [in this case p<0.00001, which indicates that there is a significant difference between the two interventions studied]. Note- the second p value relates to the diamond, which can fall on the central line, to its left or to its right. A sketch of how these heterogeneity statistics are computed is given below.
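For completeness, the sketch below shows how the heterogeneity statistics reported in the lower left corner [Cochran's Q, its p value and I²] can be computed from the study-level effect sizes and variances. It assumes scipy is available for the chi-square p value, and reuses the same illustrative values as the earlier sketches.

    # A sketch of the heterogeneity statistics shown on a Forest plot:
    # Cochran's Q, its p value, and I^2.
    from scipy.stats import chi2

    log_effects = [(-0.90, 0.09), (-0.20, 0.15), (-0.45, 0.05)]  # invented values
    weights = [1.0 / var for _, var in log_effects]
    pooled = sum(w * es for (es, _), w in zip(log_effects, weights)) / sum(weights)

    q = sum(w * (es - pooled) ** 2 for (es, _), w in zip(log_effects, weights))
    df = len(log_effects) - 1
    p_heterogeneity = chi2.sf(q, df)
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    print(f"Q = {q:.2f}, p = {p_heterogeneity:.2f}, I^2 = {i_squared:.0f}%")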
