What Is the Difference Between a Systematic Review and a Meta-Analysis?
Community child health, public health, and epidemiology
Understanding systematic reviews and meta-analysis
Abstract
This review covers the basic principles of systematic reviews and meta-analyses. The problems associated with traditional narrative reviews are discussed, as is the role of systematic reviews in limiting bias associated with the assembly, critical appraisal, and synthesis of studies addressing specific clinical questions. Important issues that need to be considered when appraising a systematic review or meta-analysis are outlined, and some of the terms used in the reporting of systematic reviews and meta-analyses, such as odds ratio, relative risk, confidence interval, and the forest plot, are introduced.
- RCT, randomised controlled trial
- systematic review
- meta-analysis
- narrative review
- critical appraisal
Health care professionals are increasingly required to base their practice on the best available evidence. In the first article of the series, I described basic strategies that could be used to search the medical literature.1 After a literature search on a specific clinical question, many articles may be retrieved. The quality of the studies may be variable, and the individual studies might have produced conflicting results. It is therefore important that health care decisions are not based solely on one or two studies without account being taken of the whole range of research information available on that topic.
Health care professionals have always used review articles as a source of summarised evidence on a particular topic. Review articles in the medical literature have traditionally been in the form of "narrative reviews", where experts in a particular field provide what is supposed to be a "summary of evidence" in that field. Narrative reviews, although still very common in the medical field, have been criticised because of the high risk of bias, and "systematic reviews" are preferred.2 Systematic reviews use scientific strategies that limit bias in the assembly, critical appraisal, and synthesis of relevant studies that address a specific clinical question.2
THE PROBLEM WITH TRADITIONAL REVIEWS
The validity of a review article depends on its methodological quality. While traditional review articles or narrative reviews can be useful when conducted properly, there is evidence that they are usually of poor quality. Authors of narrative reviews often use informal, subjective methods to collect and interpret studies and tend to be selective in citing reports that reinforce their preconceived ideas or promote their own views on a topic.3,4 They are also rarely explicit about how they selected, assessed, and analysed the primary studies, thereby not allowing readers to assess potential bias in the review process. Narrative reviews are therefore often biased, and the recommendations made may be inappropriate.5
WHAT IS A SYSTEMATIC REVIEW?
In contrast to a narrative review, a systematic review is a form of research that provides a summary of medical reports on a specific clinical question, using explicit methods to search, critically appraise, and synthesise the world literature systematically.6 It is especially useful in bringing together a number of separately conducted studies, sometimes with conflicting findings, and synthesising their results.
By providing in a clear and explicit manner a summary of all the studies addressing a specific clinical question,4 systematic reviews allow us to take account of the whole range of relevant findings from research on a particular topic, and not just the results of one or two studies. Other advantages of systematic reviews have been discussed by Mulrow.7 They can be used to establish whether scientific findings are consistent and generalisable across populations, settings, and treatment variations, or whether findings vary significantly by particular subgroups. Moreover, the explicit methods used in systematic reviews limit bias and, hopefully, will improve the reliability and accuracy of conclusions. For these reasons, systematic reviews of randomised controlled trials (RCTs) are considered to be evidence of the highest level in the hierarchy of research designs evaluating the effectiveness of interventions.8
METHODOLOGY OF A SYSTEMATIC REVIEW
The need for rigour in the preparation of a systematic review means that there should be a formal process for its conduct. Figure 1 summarises the process for conducting a systematic review of RCTs.9 This includes a comprehensive, exhaustive search for primary studies on a focused clinical question, selection of studies using clear and reproducible eligibility criteria, critical appraisal of primary studies for quality, and synthesis of results according to a predetermined and explicit method.3,9
WHAT IS A META-ANALYSIS?
Following a systematic review, data from individual studies may be pooled quantitatively and reanalysed using established statistical methods.10 This technique is called meta-analysis. The rationale for a meta-analysis is that, by combining the samples of the individual studies, the overall sample size is increased, thereby improving the statistical power of the analysis as well as the precision of the estimates of treatment effects.11
Meta-analysis is a two stage process.12 The first stage involves the calculation of a measure of treatment effect with its 95% confidence intervals (CI) for each individual study. The summary statistics that are usually used to measure treatment effect include odds ratios (OR), relative risks (RR), and risk differences.
In the second stage of meta-analysis, an overall treatment effect is calculated as a weighted average of the individual summary statistics. Readers should note that, in meta-analysis, data from the individual studies are not simply combined as if they were from a single study. Greater weights are given to the results from studies that provide more information, because they are likely to be closer to the "true effect" we are trying to estimate. The weights are often the inverse of the variance (the square of the standard error) of the treatment effect, which relates closely to sample size.12 The typical graph for displaying the results of a meta-analysis is called a "forest plot".13
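This inverse-variance weighting can be sketched in a few lines of Python. The study names, odds ratios, and standard errors below are purely illustrative (they are not taken from any review discussed in this article); pooling is done on the log odds ratio scale, as is conventional for a fixed effect analysis.

```python
import math

# Hypothetical per-study results: odds ratio and the standard error
# of its logarithm (illustrative numbers only).
studies = [
    {"name": "Trial A", "or": 0.45, "se_log_or": 0.30},
    {"name": "Trial B", "or": 0.60, "se_log_or": 0.25},
    {"name": "Trial C", "or": 0.35, "se_log_or": 0.40},
]

def pooled_odds_ratio(studies):
    """Fixed effect pooled OR: inverse-variance weighted mean of log ORs."""
    weights = [1.0 / s["se_log_or"] ** 2 for s in studies]  # 1 / variance
    log_ors = [math.log(s["or"]) for s in studies]
    pooled_log = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))  # SE of the pooled log OR
    return math.exp(pooled_log), pooled_se

or_pooled, se_pooled = pooled_odds_ratio(studies)
print(f"Pooled OR = {or_pooled:.2f} (SE of log OR = {se_pooled:.2f})")
```

Note that the pooled estimate lands between the smallest and largest individual ORs, pulled towards the more precisely estimated trials, and its standard error is smaller than any single study's, which is exactly the gain in precision described above.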
The forest plot
The plot shows, at a glance, information from the individual studies that went into the meta-analysis, and an estimate of the overall results. It also allows a visual assessment of the amount of variation between the results of the studies (heterogeneity). Figure 2 shows a typical forest plot. This figure is adapted from a recent systematic review and meta-analysis which examined the efficacy of probiotics compared with placebo in the prevention and treatment of diarrhoea associated with the use of antibiotics.14
Description of the forest plot
In the forest plot shown in fig 2, the results of nine studies have been pooled. The names on the left of the plot are the first authors of the primary studies included. The black squares represent the odds ratios of the individual studies, and the horizontal lines their 95% confidence intervals. The area of each black square reflects the weight that trial contributes to the meta-analysis. The 95% confidence interval would contain the true underlying effect on 95% of occasions if the study were repeated again and again. The solid vertical line corresponds to no effect of treatment (OR = 1.0). If the CI includes 1, then the difference in the effect of experimental and control treatment is not significant at conventional levels (p>0.05).15 The overall treatment effect (calculated as a weighted average of the individual ORs) from the meta-analysis and its CI is at the bottom, represented as a diamond. The centre of the diamond represents the combined treatment effect (0.37), and the horizontal tips represent the 95% CI (0.26 to 0.52). If the diamond is on the Left of the line of no effect, then Less (fewer episodes) of the outcome of interest is seen in the treatment group; if the diamond is on the Right of the line, then moRe episodes of the outcome of interest are seen in the treatment group. In fig 2, the diamond lies to the left of the line of no effect, meaning that less diarrhoea (fewer episodes) was seen in the probiotic group than in the placebo group. If the diamond touches the line of no effect (where the OR is 1), then there is no statistically significant difference between the groups being compared. In fig 2, the diamond does not touch the line of no effect (that is, the confidence interval for the odds ratio does not include 1), meaning that the difference found between the two groups was statistically significant.
APPRAISING A SYSTEMATIC REVIEW WITH OR WITHOUT META-ANALYSIS
Although systematic reviews occupy the highest position in the hierarchy of evidence for articles on the effectiveness of interventions,8 it should not be assumed that a report is valid simply because it is stated to be a systematic review. Just as in RCTs, the main problems to consider when appraising a systematic review can be condensed into three important areas8:
- The validity of the trial methodology.
- The magnitude and precision of the treatment effect.
- The applicability of the results to your patient or population.
Box 1 shows a list of 10 questions that may be used to appraise a systematic review in all three areas.16
Box 1: Questions to consider when appraising a systematic review16
- Did the review address a clearly focused question?
- Did the review include the right type of study?
- Did the reviewers try to identify all relevant studies?
- Did the reviewers assess the quality of all the studies included?
- If the results of the studies have been combined, was it reasonable to do so?
- How are the results presented and what are the main results?
- How precise are the results?
- Can the results be applied to your local population?
- Were all important outcomes considered?
- Should practice or policy change as a result of the evidence contained in this review?
ASSESSING THE VALIDITY OF TRIAL METHODOLOGY
Focused research question
Like all research reports, the authors should clearly state the research question at the outset. The research question should include the relevant population or patient groups being studied, the intervention of interest, any comparators (where relevant), and the outcomes of interest. Keywords from the research question and their synonyms are usually used to identify studies for inclusion in the review.
Types of studies included in the review
The validity of a systematic review or meta-analysis depends heavily on the validity of the studies included. The authors should explicitly state the type of studies they have included in their review, and readers of such reports should decide whether the included studies have the appropriate study design to answer the clinical question. In a recent systematic review which determined the effects of glutamine supplementation on morbidity and weight gain in preterm babies, the investigators based their review only on RCTs.17
Search strategy used to identify relevant articles
There is evidence that single electronic database searches lack sensitivity, and relevant articles may be missed if only one database is searched. Dickersin et al showed that only 30–80% of all known published RCTs were identifiable using MEDLINE.18 Even if relevant records are in a database, it can be difficult to retrieve them easily. A comprehensive search is therefore important, not only for ensuring that as many studies as possible are identified but also to minimise selection bias for those that are found. Relying exclusively on one database may retrieve a set of studies that is unrepresentative of all the studies that would have been identified through a comprehensive search of multiple sources. Therefore, in order to retrieve all relevant studies on a topic, several different sources should be searched to identify relevant studies (published and unpublished), and the search strategy should not be limited to the English language. The aim of an extensive search is to avoid the problem of publication bias, which occurs when trials with statistically significant results are more likely to be published and cited, and are preferentially published in English language journals and those indexed in Medline.
In the systematic review referred to above, which examined the effects of glutamine supplementation on morbidity and weight gain in preterm babies, the authors searched the Cochrane controlled trials register, Medline, and Embase,17 and they also hand searched selected journals, cross referencing where necessary from other publications.
Quality assessment of included trials
The reviewers should state a predetermined method for assessing the eligibility and quality of the studies included. At least two reviewers should independently assess the quality of the included studies to minimise the risk of selection bias. There is evidence that using at least two reviewers has an important effect on reducing the possibility that relevant reports will be discarded.19
Pooling results and heterogeneity
If the results of the individual studies were pooled in a meta-analysis, it is important to determine whether it was reasonable to do so. A clinical judgement should be made about whether it was reasonable for the studies to be combined, based on whether the individual trials differed considerably in the populations studied, the interventions and comparisons used, or the outcomes measured.
The statistical validity of combining the results of the various trials should be assessed by looking for homogeneity of the outcomes from the various trials. In other words, there should be some consistency in the results of the included trials. One way of doing this is to inspect the graphical display of the results of the individual studies (forest plot, see above), looking for similarities in the direction of the results. When the results differ greatly in their direction (that is, when there is significant heterogeneity), it may not be wise for the results to be pooled. Some articles may also report a statistical test for heterogeneity, but it should be noted that the statistical power of many meta-analyses is usually too low to allow the detection of heterogeneity based on statistical tests. If a study finds significant heterogeneity among reports, the authors should attempt to offer explanations for potential sources of the heterogeneity.
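The article does not name a particular heterogeneity test, but one widely used choice is Cochran's Q, often reported alongside the I² statistic (the approximate percentage of total variation attributable to between-study differences). A minimal sketch, using hypothetical log odds ratios and standard errors:

```python
import math

# Hypothetical log odds ratios and their standard errors for four studies;
# the fourth study points in the opposite direction to the others.
log_ors = [-0.80, -0.51, -1.05, 0.10]
ses = [0.30, 0.25, 0.40, 0.35]

# Cochran's Q: weighted sum of squared deviations from the pooled estimate.
weights = [1.0 / se ** 2 for se in ses]
pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
q = sum(w * (lo - pooled) ** 2 for w, lo in zip(weights, log_ors))

# I^2: proportion of total variation due to between-study heterogeneity.
df = len(log_ors) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

This mirrors the visual check described above: the one study whose effect lies on the other side of the line of no effect inflates Q well beyond its degrees of freedom, signalling heterogeneity that would also be obvious on the forest plot.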
Magnitude of the treatment effect
Common measures used to report the results of meta-analyses include the odds ratio, relative risk, and mean differences. If the outcome is binary (for example, disease v no disease, remission v no remission), odds ratios or relative risks are used. If the outcome is continuous (for example, blood pressure measurement), mean differences may be used.
ODDS RATIOS AND RELATIVE RISKS
Odds and odds ratio
The odds for a group is defined as the number of patients in the group who achieve the stated end point divided by the number of patients who do not. For example, the odds of acne resolution during treatment with an antibiotic in a group of 10 patients may be 6 to 4 (6 with resolution of acne divided by 4 without = 1.5); in a control group the odds may be 3 to 7 (0.43). The odds ratio, as the name implies, is a ratio of two odds. It is simply defined as the ratio of the odds of the treatment group to the odds of the control group. In our example, the odds ratio of treatment to control group would be 3.5 (1.5 divided by 0.43).
Risk and relative risk
Risk, as opposed to odds, is calculated as the number of patients in the group who achieve the stated end point divided by the total number of patients in the group. Risk ratio or relative risk is a ratio of two "risks". In the example above, the risks would be 6 in 10 in the treatment group (6 divided by 10 = 0.6) and 3 in 10 in the control group (0.3), giving a risk ratio, or relative risk, of 2 (0.6 divided by 0.3).
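The acne example above (6 of 10 patients resolving on treatment, 3 of 10 on control) can be checked directly with two one-line functions:

```python
def odds(events, total):
    """Odds: those reaching the end point divided by those who do not."""
    return events / (total - events)

def risk(events, total):
    """Risk: those reaching the end point divided by the whole group."""
    return events / total

# Treatment: 6 of 10 with acne resolution; control: 3 of 10.
odds_ratio = odds(6, 10) / odds(3, 10)     # 1.5 / 0.43
relative_risk = risk(6, 10) / risk(3, 10)  # 0.6 / 0.3

print(f"OR = {odds_ratio:.2f}, RR = {relative_risk:.1f}")  # OR = 3.50, RR = 2.0
```

The two denominators are the only difference between the measures, yet here they give noticeably different answers (3.5 v 2), foreshadowing the point made below about common outcomes.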
Interpretation of odds ratios and relative risks
An odds ratio or relative risk greater than 1 indicates an increased likelihood of the stated outcome being achieved in the treatment group. If the odds ratio or relative risk is less than 1, there is a decreased likelihood in the treatment group. A ratio of 1 indicates no difference, that is, the outcome is just as likely to occur in the treatment group as it is in the control group.11 As in all estimates of treatment effect, odds ratios or relative risks reported in meta-analysis should be accompanied by confidence intervals.
Readers should understand that the odds ratio will be close to the relative risk if the end point occurs relatively infrequently, say in less than 20%.15 If the outcome is more common, then the odds ratio will considerably overestimate the relative risk. The advantages and disadvantages of odds ratios v relative risks in the reporting of the results of meta-analysis have been reviewed elsewhere.12
Precision of the treatment effect: confidence intervals
As stated earlier, confidence intervals should accompany estimates of treatment effects. I discussed the concept of confidence intervals in the second article of the series.8 Ninety five per cent confidence intervals are commonly reported, but other intervals, such as 90% or 99%, are also sometimes used. The 95% CI of an estimate (for example, of an odds ratio or relative risk) is the range within which we are 95% certain that the true population treatment effect will lie. The width of a confidence interval indicates the precision of the estimate: the wider the interval, the less the precision. A very wide interval makes us less sure about the accuracy of a study in predicting the true size of the effect. If the confidence interval for a relative risk or odds ratio includes 1, then we have been unable to demonstrate a statistically significant difference between the groups being compared; if it does not include 1, then we say that there is a statistically significant difference.
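For a single 2×2 table, an approximate 95% CI for the odds ratio is usually built on the log scale (Woolf's method). Applying it to the earlier acne example (6/4 in the treatment group v 3/7 in the control group) is an illustration of my own, not a calculation from the article:

```python
import math

def or_with_ci(a, b, c, d, z=1.96):
    """Odds ratio (a/b)/(c/d) with an approximate 95% CI on the log scale.

    a, b: treatment group with/without the end point;
    c, d: control group with/without the end point.
    """
    or_ = (a / b) / (c / d)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Acne example: treatment 6 resolved / 4 not; control 3 resolved / 7 not.
or_, lo, hi = or_with_ci(6, 4, 3, 7)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

With only 20 patients the interval runs from well below 1 to far above it, so despite the large point estimate (OR = 3.5) the difference is not statistically significant; this is precisely the link between wide intervals and low precision described above.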
APPLICABILITY OF RESULTS TO PATIENTS
Health care professionals should always make judgements about whether the results of a particular study are applicable to their own patient or group of patients. Some of the issues that one needs to consider before deciding whether to incorporate a particular piece of research evidence into clinical practice were discussed in the second article of the series.8 These include the similarity of the study population to your population, benefit v harm, patients' preferences, availability, and costs.
CONCLUSIONS
Systematic reviews apply scientific strategies to provide, in an explicit manner, a summary of all studies addressing a specific question, thereby allowing an account to be taken of the whole range of relevant findings on a particular topic. Meta-analysis, which may accompany a systematic review, can increase the power and precision of estimates of treatment effects. People working in the field of paediatrics and child health should understand the basic principles of systematic reviews and meta-analyses, including the ability to apply critical appraisal not only to the methodologies of review articles, but also to the applicability of the results to their own patients.
Copyright information:
Copyright 2005 Archives of Disease in Childhood
Source: https://adc.bmj.com/content/90/8/845