
Question 01:

Definition:
A meta-analysis is a statistical analysis that combines the results of multiple scientific studies.
Meta-analyses can be performed when there are multiple scientific studies addressing the same
question, with each individual study reporting measurements that are expected to have some
degree of error. The aim then is to use approaches from statistics to derive a pooled estimate
closest to the unknown common truth based on how this error is perceived. Meta-analytic
results are considered the most trustworthy source of evidence by the evidence-based
medicine literature.[1][2]
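To make the idea of a pooled estimate concrete, the following is a minimal sketch (not taken from the cited sources) of a fixed-effect, inverse-variance weighted average; the study estimates and standard errors are made-up illustrative values.

```python
# Illustrative sketch of the core idea behind meta-analytic pooling:
# a fixed-effect, inverse-variance weighted average of study estimates.
# The effect sizes and standard errors below are made-up numbers,
# not taken from any real study.
import math

effects = [0.42, 0.31, 0.55, 0.38]          # per-study effect estimates
std_errors = [0.10, 0.15, 0.20, 0.12]       # per-study standard errors

weights = [1 / se**2 for se in std_errors]  # weight = 1 / variance
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))     # SE of the pooled estimate

# 95% confidence interval for the pooled effect
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"Pooled effect: {pooled:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

A real meta-analysis would also consider a random-effects model and between-study heterogeneity, but the weighting idea above is the common starting point.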
History:
The historical roots of meta-analysis can be traced back to 17th-century studies of astronomy,[3] while a paper published in 1904 by the statistician Karl Pearson in the British Medical Journal, which collated data from several studies of typhoid inoculation, is seen as the first time a meta-analytic approach was used to aggregate the outcomes of multiple clinical studies. The first meta-analysis of all conceptually identical experiments concerning a particular research issue, conducted by independent researchers, has been identified as the 1940 book-length publication Extrasensory Perception After Sixty Years, authored by Duke University psychologists J. G. Pratt, J. B. Rhine, and associates.[4] This encompassed a review of 145 reports on ESP experiments published from 1882 to 1939, and included an estimate of the influence of unpublished papers on the overall effect (the file-drawer problem). The term "meta-analysis" was coined in 1976 by the statistician Gene V. Glass,[5][6] who stated: "my major interest currently is in what we have come to call ... the meta-analysis of research. The term is a bit grand, but it is precise and apt ... Meta-analysis refers to the analysis of analyses". Although this led to him being widely recognized as the modern founder of the method, the methodology behind what he termed "meta-analysis" predates his work by several decades. The field of meta-analysis has expanded greatly since the 1970s and touches multiple disciplines including psychology, medicine, and ecology.[7] Furthermore, the more recent creation of evidence synthesis communities has increased the cross-pollination of ideas, methods, and software tools across disciplines.
Advantages:
1. Meta-analysis offers the opportunity to critically evaluate and statistically combine the results of comparable studies or trials, thereby increasing the number of observations and the statistical power, and improving the estimate of the effect size of an intervention or an association.[8]
2. Encourages the design of good trials and increases the strength of conclusions.
3. Makes the results more suitable for generalization to a larger population.
4. Improves the precision and accuracy of estimates through the use of more data.
5. May increase the statistical power to detect an effect.
6. Allows examination of sources of variability across multiple studies while adjusting for measures shared by the studies.
7. Provides one pooled outcome across a variety of studies.

Disadvantages:
1. Meta-analysis may discourage large definitive trials.
2. Increases the tendency to unwittingly mix different trials and ignore their differences.
3. Potential tension between the meta-analyst and the conductors of the original trials may introduce bias.
4. Meta-analysis of several small studies may not predict the results of a single large study.
5. Sources of bias are not controlled by the method.
6. A good meta-analysis of badly designed studies will still result in bad statistics.

Importance of Meta-Analysis:
1. The main purpose of a synthesis is to understand the results of any study in the context of all the other studies. First, we need to know whether or not the effect size is consistent across the body of data. If it is consistent, then we want to estimate the effect size as accurately as possible and to report that it is robust across the kinds of studies included in the synthesis. On the other hand, if it varies substantially from study to study, we want to quantify the extent of that variance and consider its implications (a sketch of how such variability can be quantified follows this list). Meta-analysis is able to address these issues, whereas the narrative review is not; comparing how the two approaches would handle the same question highlights the key differences between them.[9]
2. By combining the effects of all studies, a meta-analysis produces an overall mean effect and other relevant statistics.
3. Traditionally, a narrative (qualitative) evaluation would be produced after several studies had been conducted, in an effort to explain why the effect was or was not genuine.
4. The purpose, therefore, is to determine the size of an effect with sufficient accuracy.
5. Nevertheless, researchers place little trust in a single study of an effect, regardless of how thorough and statistically significant it is.[10]
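As referenced in point 1, between-study variability is usually quantified with heterogeneity statistics. The snippet below is a minimal sketch, assuming the conventional Cochran's Q and I² statistics; the effect sizes and standard errors are illustrative values, not data from any cited study.

```python
# Minimal sketch of quantifying between-study variability with Cochran's Q
# and I^2. The effect sizes and standard errors are illustrative only.
import math

effects = [0.10, 0.35, 0.60, 0.45]      # per-study effect estimates (made up)
std_errors = [0.08, 0.12, 0.10, 0.15]   # per-study standard errors (made up)

weights = [1 / se**2 for se in std_errors]          # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled effect
q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: share of total variation attributable to between-study heterogeneity
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.1f}%")
```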

Application of Meta-Analysis:
1. As meta-analyses are used more frequently and their findings reach a wider audience, it is the responsibility of researchers to follow current guidelines and apply their findings appropriately to form valid conclusions. As researchers gain experience with this technique, we need to recognize that our methods may change over time. Meta-analysis remains a valuable tool for examining controversies arising from conflicting studies.[11]
2. Meta-analysis can be done with single-subject designs as well as group research designs.[12] This is important because much research has been done with single-subject research designs.[13] Considerable dispute exists over the most appropriate meta-analytic technique for single-subject research.
3. Meta-analytic methods are also used in the development and validation of clinical prediction models, where meta-analysis may be used to combine individual participant data from different research centers and to assess a model's generalizability,[14] or even to aggregate existing prediction models.
4. Meta-analysis is used to assess a wide array of interventions and diseases, ranging from the phases of the moon and lunacy[9] to the effects of wine and chocolate on cardiovascular disease and the effectiveness of probiotics for many conditions [for example, antibiotic-associated diarrhea (AAD), allergies, and acute pediatric diarrhea].[15]

Question 02:

Odds Ratio:
An odds ratio (OR) is a statistic that quantifies the strength of the association between two
events, A and B. The odds ratio is defined as the ratio of the odds of A in the presence of B and
the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B
in the presence of A and the odds of B in the absence of A. Two events are independent if and
only if the OR equals 1, i.e., the odds of one event are the same in either the presence or
absence of the other event. If the OR is greater than 1, then A and B are associated (correlated)
in the sense that, compared to the absence of B, the presence of B raises the odds of A, and
symmetrically the presence of A raises the odds of B. Conversely, if the OR is less than 1, then A
and B are negatively correlated, and the presence of one event reduces the odds of the other
event.[16]
The computational formula for the odds ratio is
$$ \text{OddsRatio} = \frac{AD}{BC}, $$
where $A$ and $B$ are the numbers of events and non-events in the treated group, and $C$ and $D$ are the numbers of events and non-events in the control group. The log odds ratio is then
$$ \text{LogOddsRatio} = \ln(\text{OddsRatio}), $$
with approximate variance
$$ V_{\text{LogOddsRatio}} = \frac{1}{A} + \frac{1}{B} + \frac{1}{C} + \frac{1}{D} $$
and approximate standard error
$$ SE_{\text{LogOddsRatio}} = \sqrt{V_{\text{LogOddsRatio}}}. $$

Figure: Odds ratios are analyzed in log units.

Note that we do not compute a variance for the odds ratio itself. Rather, the log odds ratio and its variance are used in the analysis to yield a summary effect, confidence limits, and so on, in log units. We then convert each of these values back to odds ratios using
$$ \text{OddsRatio} = \exp(\text{LogOddsRatio}), \quad LL_{\text{OddsRatio}} = \exp(LL_{\text{LogOddsRatio}}), \quad UL_{\text{OddsRatio}} = \exp(UL_{\text{LogOddsRatio}}). $$
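The steps above can be illustrated with a short Python sketch. The 2×2 counts below (A = 5, B = 95, C = 10, D = 90) match the illustrative 5/100 versus 10/100 running example used for the risk ratio in the next subsection; the snippet is a sketch under those assumptions, not a prescribed implementation.

```python
# Minimal sketch of the odds-ratio computations above, using a 2x2 table
# with A, B = events / non-events in the treated group and
# C, D = events / non-events in the control group (illustrative counts).
import math

A, B = 5, 95     # treated: events, non-events
C, D = 10, 90    # control: events, non-events

odds_ratio = (A * D) / (B * C)
log_or = math.log(odds_ratio)
var_log_or = 1/A + 1/B + 1/C + 1/D          # approximate variance
se_log_or = math.sqrt(var_log_or)           # approximate standard error

# Confidence limits are built in log units, then exponentiated back
ll = math.exp(log_or - 1.96 * se_log_or)
ul = math.exp(log_or + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.3f}, 95% CI: ({ll:.3f}, {ul:.3f})")
```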

Risk Ratio:
The risk ratio is simply the ratio of two risks. In the running example, the risk of death in the treated group is 5/100 and the risk of death in the control group is 10/100, so the ratio of the two risks is 0.50. This index has the advantage of being intuitive, in the sense that the meaning of a ratio is clear. For risk ratios, computations are carried out on a log scale. We compute the log risk ratio and the standard error of the log risk ratio, and use these numbers to perform all steps in the meta-analysis. Only then do we convert the results back into the original metric.
The computational formula for the risk ratio is
$$ \text{RiskRatio} = \frac{A/n_1}{C/n_2}, $$
where $A$ and $C$ are the numbers of events and $n_1$ and $n_2$ the sample sizes in the treated and control groups, respectively. The log risk ratio is then
$$ \text{LogRiskRatio} = \ln(\text{RiskRatio}), $$
with approximate variance
$$ V_{\text{LogRiskRatio}} = \frac{1}{A} - \frac{1}{n_1} + \frac{1}{C} - \frac{1}{n_2} $$
and approximate standard error
$$ SE_{\text{LogRiskRatio}} = \sqrt{V_{\text{LogRiskRatio}}}. $$

Figure: Risk ratios are analyzed in log units.

We do not compute a variance for the risk ratio in its original metric. Rather, we use the log risk ratio and its variance in the analysis to yield a summary effect, confidence limits, and so on, in log units. We then convert each of these values back to risk ratios using
$$ \text{RiskRatio} = \exp(\text{LogRiskRatio}), \quad LL_{\text{RiskRatio}} = \exp(LL_{\text{LogRiskRatio}}), \quad UL_{\text{RiskRatio}} = \exp(UL_{\text{LogRiskRatio}}). $$
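A minimal sketch of the same steps for the risk ratio, using the 5/100 versus 10/100 running example:

```python
# Minimal sketch of the risk-ratio computations above, using the
# running example: 5 deaths out of 100 treated, 10 out of 100 controls.
import math

A, n1 = 5, 100    # events and sample size, treated group
C, n2 = 10, 100   # events and sample size, control group

risk_ratio = (A / n1) / (C / n2)            # 0.50 in this example
log_rr = math.log(risk_ratio)
var_log_rr = 1/A - 1/n1 + 1/C - 1/n2        # approximate variance
se_log_rr = math.sqrt(var_log_rr)           # approximate standard error

# Work in log units, then convert the limits back to the ratio scale
ll = math.exp(log_rr - 1.96 * se_log_rr)
ul = math.exp(log_rr + 1.96 * se_log_rr)
print(f"RR = {risk_ratio:.2f}, 95% CI: ({ll:.2f}, {ul:.2f})")
```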
Risk Difference:
The risk difference is the difference between two risks. Here, the risk in the treated group is 0.05 and the risk in the control group is 0.10, so the risk difference (treated minus control) is −0.05. Unlike the case for risk ratios and odds ratios, computations for risk differences are carried out in raw units rather than log units. The risk difference is defined as
$$ \text{RiskDiff} = \frac{A}{n_1} - \frac{C}{n_2}, $$
with approximate variance
$$ V_{\text{RiskDiff}} = \frac{AB}{n_1^3} + \frac{CD}{n_2^3} $$
and approximate standard error
$$ SE_{\text{RiskDiff}} = \sqrt{V_{\text{RiskDiff}}}. $$
In the running example ($A = 5$, $B = 95$, $n_1 = 100$; $C = 10$, $D = 90$, $n_2 = 100$),
$$ \text{RiskDiff} = 0.05 - 0.10 = -0.05, $$
with variance
$$ V_{\text{RiskDiff}} = \frac{5 \times 95}{100^3} + \frac{10 \times 90}{100^3} = 0.001375 $$
and standard error
$$ SE_{\text{RiskDiff}} = \sqrt{0.001375} \approx 0.0371. $$
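A corresponding sketch for the risk difference, again with the running example; note that no log transformation is involved:

```python
# Minimal sketch of the risk-difference computations, with the running
# example (5/100 treated vs 10/100 control). Unlike the ratio measures,
# everything here stays in raw (not log) units.
import math

A, n1 = 5, 100
C, n2 = 10, 100
B, D = n1 - A, n2 - C                        # non-events in each group

risk_diff = A / n1 - C / n2                  # -0.05
var_rd = (A * B) / n1**3 + (C * D) / n2**3   # 0.001375
se_rd = math.sqrt(var_rd)                    # about 0.0371

ll = risk_diff - 1.96 * se_rd
ul = risk_diff + 1.96 * se_rd
print(f"RD = {risk_diff:.3f}, 95% CI: ({ll:.3f}, {ul:.3f})")
```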


Distinguishing between the odds ratio, risk ratio, and risk difference:
1. To work with the risk ratio or odds ratio we transform all values to log values, perform the analyses, and then convert the results back to ratio values for presentation. To work with the risk difference we work with the raw values.[17]
2. The researcher must take into account both substantive and technical issues when deciding between the risk ratio, odds ratio, and risk difference. Because they are relative measures, the risk ratio and odds ratio are frequently not very sensitive to variations in baseline risk.
3. Some recommend using the risk ratio (or odds ratio) to conduct the meta-analysis and compute a summary risk (or odds) ratio, since the ratios are less susceptible to baseline risk, while the risk difference is sometimes more clinically meaningful. The summary ratio can then be used to forecast the risk difference for any given baseline risk (see the sketch after this list).
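The forecasting idea in point 3 can be sketched as follows; the summary risk ratio and baseline risks are illustrative values chosen only to show how a single pooled ratio translates into different risk differences at different baseline risks.

```python
# Sketch of the suggestion above: pool on the ratio scale, then translate
# the summary risk ratio into a risk difference for any assumed baseline
# risk. The summary RR and baseline risks below are illustrative values.
summary_rr = 0.80            # e.g. a pooled risk ratio from a meta-analysis

for baseline_risk in (0.02, 0.10, 0.30):      # assumed control-group risks
    treated_risk = baseline_risk * summary_rr
    risk_diff = treated_risk - baseline_risk  # = baseline * (RR - 1)
    print(f"baseline {baseline_risk:.2f} -> predicted RD {risk_diff:+.3f}")
```

The same ratio yields very different absolute differences, which is exactly why the two measures are reported for different purposes.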

Conditions under which the odds ratio, risk ratio, and risk difference apply:
1. If there is absolutely no difference between the groups in the probability of an outcome, then both the OR and the RR are 1.0. That is the only situation in which they can be exactly equal.
2. An RR (or OR) of 1.0 indicates that there is no difference in risk (or odds) between the groups being compared. An RR (or OR) greater than 1.0 indicates an increase in risk (or odds) among the exposed compared with the unexposed, whereas an RR (or OR) less than 1.0 indicates a decrease in risk (or odds) in the exposed group. As for other summary statistics, confidence intervals can be calculated for the RR and OR.[18]
3. The risk difference focuses on the absolute effect of the risk factor, or the excess risk of disease in those who have the factor compared with those who do not.

Choosing Effect Size:


1. Ratio indices (the risk ratio and odds ratio) are relative measures: if a compound reduces the risk of an event by, say, 20% regardless of the baseline risk, we would expect to see the same ratio across studies even if the baseline risk varies from study to study. In contrast, because it is an absolute measure, the risk difference is highly dependent on the baseline risk.
2. In trials with a higher base rate, the same relative effect therefore corresponds to a larger risk difference.
3. We can compute the risk of an event (such as the risk of death) in each group (for example,
treated versus control). The difference in these risks then serves as an effect size (the risk
difference).[19]
4. The risk difference can be a better indicator of a treatment's clinical impact. Suppose a meta-analysis compares the risk of unfavorable events for treatment and control groups, and the risk is 1/1000 for treated patients and 1/2000 for controls, giving a risk ratio of 2.00. The corresponding risk difference is 0.0010 − 0.0005 = 0.0005. Although these two figures (2.00 and 0.0005) measure distinct things, both are accurate (a quick numeric check follows this list).
5. As noted above, some recommend performing the meta-analysis and computing a summary risk (or odds) ratio using the risk ratio (or odds ratio), since the ratios are less susceptible to baseline risk, while the risk difference is occasionally more clinically meaningful; the summary ratio can then be used to forecast the risk difference for any given baseline risk.
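A quick numeric check of the figures in point 4, using the hypothetical risks of 1/1000 and 1/2000:

```python
# Verify the worked numbers in point 4: the same data give a risk ratio
# of 2.00 but a risk difference of only 0.0005.
risk_treated = 1 / 1000
risk_control = 1 / 2000

rr = risk_treated / risk_control    # 2.0
rd = risk_treated - risk_control    # 0.0005
print(f"RR = {rr:.2f}, RD = {rd:.4f}")
```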

References:
1. Herrera Ortiz AF., Cadavid Camacho E, Cubillos Rojas J, Cadavid Camacho T, Zoe Guevara S,
Tatiana Rincón Cuenca N, Vásquez Perdomo A, Del Castillo Herazo V, & Giraldo Malo R. A
Practical Guide to Perform a Systematic Literature Review and Meta-analysis. Principles and
Practice of Clinical Research. 2022;7(4):47–57. https://doi.org/10.21801/ppcrj.2021.74.6
2. "Levels of Evidence". Centre for Evidence-Based Medicine (CEBM). University of Oxford.
March 2009. Retrieved 21 December 2021.
3. Plackett RL (1958). "Studies in the History of Probability and Statistics: Vii. The Principle of the
Arithmetic Mean". Biometrika. 45 (1–2): 133. doi:10.1093/biomet/45.1-2.130.
4. "Report on Certain Enteric Fever Inoculation Statistics". British Medical Journal. 2 (2288): 1243–1246. November 1904. doi:10.1136/bmj.2.2288.1243. PMC 2355479. PMID 20761760.
5. Cochran WG, Carroll SP (1953). "A Sampling Investigation of the Efficiency of Weighting
Inversely as the Estimated Variance". Biometrics. 9 (4): 447–459. doi:10.2307/3001436. JSTOR
3001436.
6. Hedges LV (September 2015). "The early history of meta-analysis". Research Synthesis
Methods. 6 (3): 284–286. doi:10.1002/jrsm.1149. PMID 26097046. S2CID 206155786.
7. Gurevitch J, Koricheva J, Nakagawa S, Stewart G (March 2018). "Meta-analysis and the
science of research synthesis". Nature. 555 (7695): 175–182. Bibcode:2018Natur.555..175G.
doi:10.1038/nature25753. PMID 29517004. S2CID 3761687.
8. Fagard, Robert H.; Staessen, Jan A.; Thijs, Lutgarde. Advantages and disadvantages of the meta-analysis approach. Journal of Hypertension. September 1996; Volume 14: S9–S13.
9. Statistics in Medicine 2002 vol 21, no 11: proceedings from the 3rd Symposium on Systematic Review
Methodology.

10. Borenstein, M., Hedges, L., Higgins, J., and Rothstein, H. R. (2009). Computing Effect Sizes for Meta-
analysis. Chichester: John Wiley and Sons, Ltd.

11. Gebele C, Tscheulin DK, Lindenmeier J, Drevs F, Seemann AK. Applying the concept of consumer
confusion to healthcare: development and validation of a patient confusion model. Health Serv Manage
Res. 2014;27:10-21.

12. Shadish, William R. (2014). "Analysis and meta-analysis of single-case designs: An introduction".
Journal of School Psychology. 52 (2): 109–122. doi:10.1016/j.jsp.2013.11.009. PMID 24606971.

13. Zelinsky, Nicole A. M.; Shadish, William (19 May 2018). "A demonstration of how to do a meta-
analysis that combines single-case designs with between-groups experiments: The effects of choice
making on challenging behaviors performed by people with disabilities". Developmental
Neurorehabilitation. 21 (4): 266–278.

14. Debray TP, Moons KG, Ahmed I, Koffijberg H, Riley RD (August 2013). "A framework for developing,
implementing, and evaluating clinical prediction models in an individual participant data meta-analysis".
Statistics in Medicine. 32 (18): 3158–3180. doi:10.1002/sim.5732. PMID 23307585. S2CID 25308961.

15. Szajewska H. Pooling data on different probiotics is not appropriate to assess the efficacy of probiotics. Eur J Pediatr. 2014;173:975.

16. Szumilas, Magdalena (August 2010). "Explaining Odds Ratios". Journal of the Canadian Academy of
Child and Adolescent Psychiatry. 19 (3): 227–229. ISSN 1719-8429. PMC 2938757. PMID 20842279.

17. Whitehead, J. (1997). The Design and Analysis of Sequential Clinical Trials (rev. 2nd edn). Chichester, UK: John Wiley & Sons.

18. Ranganathan P, Aggarwal R, Pramesh CS. Common pitfalls in statistical analysis: Odds versus risk.
Perspect Clin Res. 2015 Oct-Dec;6(4):222-4. doi: 10.4103/2229-3485.167092. PMID: 26623395; PMCID:
PMC4640017.

19. van Houwelingen, H. C., Arends, L.R., Stijnen, T. (2002). Advanced methods in meta-analysis:
multivariate approach and meta-regression. Statistics in Medicine, 21: 589–624.
