Systematic review and meta-analysis are essential tools for synthesizing the evidence needed to inform decision making. Systematic reviews summarize the available literature using specific search parameters, followed by critical appraisal and logical synthesis of multiple primary studies.
Meta-analysis refers to the statistical analysis of data from independent primary studies focused on the same question, with the objective of generating a quantitative estimate of the phenomenon studied, for example, the effectiveness of an intervention. In clinical research, systematic reviews and meta-analyses are a fundamental part of evidence-based medicine. However, in basic science, attempts to assess previous literature in such a rigorous and quantitative manner are rare, and narrative reviews prevail.
How to Prepare Meta-Analyses
Meta-analyses can be a challenging undertaking, requiring tedious review and statistical understanding. Software packages that support meta-analysis include the MetaXL and Mix 2.0 Excel add-ins, RevMan, Comprehensive Meta-Analysis software, JASP, and the metafor package for R. Although these packages can be adapted to basic science projects, difficulties can arise due to the specific characteristics of basic science studies, such as large and complex data sets and the heterogeneity of experimental methodology.
Validity of tests in the basic sciences
To assess the translational potential of basic research, the validity of the evidence must first be assessed, usually by examining the approach taken to collect and evaluate the data. Basic science studies are broadly grouped as hypothesis-generating and hypothesis-based. The former tend to be proof-of-principle studies with small samples and are often exploratory and less valid than the latter.
It can even be argued that studies reporting novel results also belong to this group, as their results are still subject to external validation before being accepted by the wider scientific community. On the other hand, hypothesis-based studies are based on what is known or what previous work suggests. These studies can also validate previous experimental findings with incremental contributions. Although these studies are often overlooked and even dismissed due to lack of substantial novelty, their role in external validation of earlier work is critical to establishing the translational potential of the findings.
Selection of the Experimental Model
Another dimension of test validity in the basic sciences is the selection of the experimental model. The human condition is nearly impossible to recapitulate in a laboratory setting, so experimental models (e.g., cell lines, primary cells, animal models) are used to mimic the phenomenon of interest, albeit imperfectly. For this reason, the best-quality evidence comes from evaluating the performance of several independent experimental models.
This is achieved through systematic approaches that consolidate evidence from multiple studies, filtering signal from noise and allowing findings to be compared across models. While systematic reviews can be conducted for qualitative comparison, meta-analytic approaches employ statistical methods that allow hypotheses to be generated and tested.
When a meta-analysis in the basic sciences is based on a hypothesis, it can be used to assess the translational potential of a given finding and provide recommendations for further clinical and translational studies. On the other hand, if the meta-analytic hypothesis tests are inconclusive, or if exploratory analyses are performed to examine sources of inconsistency between studies, new hypotheses can be generated and subsequently tested experimentally.
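As an illustration of the quantitative synthesis step, the sketch below pools effect sizes from five hypothetical studies by inverse-variance weighting under a fixed-effect model and computes Cochran's Q and I² as heterogeneity measures. All numbers are invented for illustration; a real analysis would use a dedicated package such as metafor.

```python
import math

# Illustrative effect sizes (e.g., mean differences) and standard errors
# from five hypothetical primary studies.
effects = [0.42, 0.58, 0.31, 0.50, 0.45]
ses     = [0.10, 0.15, 0.12, 0.20, 0.11]

# Inverse-variance weights: more precise studies contribute more.
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q and I² quantify between-study heterogeneity.
q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100  # % of variation beyond chance

print(f"pooled effect = {pooled:.3f} ± {pooled_se:.3f} (SE)")
print(f"Q = {q:.2f}, I² = {i2:.1f}%")
```

A high I² would motivate a random-effects model or subgroup analyses to explore sources of inconsistency between studies.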
Meta-analysis Methodology
Search and selection strategies
The first stage of any review involves the formulation of a primary objective in the form of a research question or hypothesis. Reviewers should explicitly define the purpose of the review before starting the project, which reduces the risk of data dredging, in which reviewers retroactively assign meaning to significant findings (Kwon et al., 2015).
Secondary objectives can also be defined; however, caution is warranted as the search strategies formulated for the primary objective may not fully encompass the body of work required to address the secondary objective. Depending on the purpose of a review, reviewers may choose to perform a rapid or systematic review. Although the meta-analytic methodology is similar for systematic and rapid reviews, the scope of the assessed literature tends to be significantly narrower for rapid reviews, allowing the project to move forward more quickly.
Systematic review and Meta-analysis
Systematic reviews involve comprehensive search strategies that allow reviewers to identify all relevant studies on a defined topic (DeLuca et al., 2008). Meta-analytic methods allow reviewers to quantitatively assess and synthesize study results to gain insights into statistical significance and relevance.
Systematic reviews of basic research data have the potential to produce information-rich databases that allow extensive secondary analysis. In order to comprehensively screen the available body of information, the search criteria must be sensitive enough not to miss relevant studies. Key terms and concepts that are expressed as synonymous keywords and index terms, such as Medical Subject Headings (MeSH), must be combined using the Boolean operators AND, OR, and NOT.
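As a concrete illustration, a Boolean query of this form can be assembled programmatically. The topic, terms, and field tag below are hypothetical; in practice each synonym block would also include the corresponding MeSH and index terms, and the final string would be adapted to each database's syntax.

```python
# Hypothetical concept blocks for a basic-science search; terms are
# illustrative only.
concepts = {
    "cell":     ["osteoblast", "osteocyte", "bone cell"],
    "stimulus": ["mechanical loading", "fluid shear", "mechanotransduction"],
}
excluded = ["review[Publication Type]"]

# Synonyms within a concept are joined with OR; concepts are joined
# with AND; unwanted record types are removed with NOT.
blocks = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")"
          for terms in concepts.values()]
query = " AND ".join(blocks)
if excluded:
    query += " NOT (" + " OR ".join(excluded) + ")"

print(query)
```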
Refining the Strategy
Truncations, wildcards, and proximity operators can also help refine a search strategy by including spelling variations and different wordings of the same concept (Ecker & Skelly, 2010). Search strategies can be validated by a selection of anticipated relevant studies. If the search strategy fails to retrieve even one of the selected studies, the search strategy requires further optimization.
This process is repeated, updating the search strategy at each iterative step until it performs at a satisfactory level. A comprehensive search is expected to return a large number of studies, many of them irrelevant to the topic, commonly resulting in a specificity of <10%. Therefore, the initial stage of screening the library to select relevant studies is time-consuming (it can take from 6 months to 2 years) and prone to human error.
At this stage, it is recommended to include at least two independent reviewers to minimize selection bias and related errors. Nevertheless, systematic reviews have the potential to provide the highest quality quantitative evidence synthesis to directly inform basic, preclinical, and translational experimental and computational studies.
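The iterative validation loop described above can be operationalized as a simple check of a draft search against a hand-picked set of known-relevant studies. The PubMed IDs below are hypothetical:

```python
# Hypothetical PubMed IDs: a hand-picked validation set of studies the
# reviewers already know are relevant, versus what a draft search returned.
validation_set = {"29301234", "28405678", "31209876", "30567321"}
retrieved = {"29301234", "28405678", "31209876", "27008899", "26111222"}

# If any validation study is missed, the strategy needs another iteration.
missed = validation_set - retrieved
if missed:
    print(f"Missed {len(missed)} known-relevant studies: {sorted(missed)}")
    print("-> broaden the search terms and repeat.")
else:
    print("All validation studies retrieved; strategy performs adequately.")
```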
Rapid review and Meta-analysis
The objective of the rapid review, as its name suggests, is to reduce the time needed to synthesize the information. Rapid reviews are a suitable alternative to systematic approaches if reviewers prefer to get a general idea of the state of the field without a large investment of time. Search strategies are constructed by increasing the specificity of the search, thus reducing the number of irrelevant studies identified at the expense of the completeness of the search (Haby et al., 2016).
The strength of a rapid review lies in its flexibility to adapt to the needs of the reviewer, which results in a lack of standardized methodology. Common shortcuts taken in rapid reviews are:
(i) Restricting the search criteria.
(ii) Imposing date restrictions.
(iii) Performing the review with a single reviewer.
(iv) Skipping expert consultation (e.g., a librarian for search strategy development).
(v) Restricting language criteria (e.g., English only).
(vi) Waiving the iterative process of searching and selecting search terms.
(vii) Skipping quality-control checklist criteria.
(viii) Limiting the number of databases searched.
Consequences of Shortcuts
These shortcuts limit the initial set of studies returned by the search, thus speeding up the selection process, but they can also exclude relevant studies and introduce selection bias. Although there is no consensus that rapid reviews sacrifice quality or synthesize unrepresentative results, it is recommended that critical results be further verified by a systematic review.
Rapid reviews are nevertheless a viable alternative when parameters need to be estimated for computational modeling. Although systematic and rapid reviews rely on different strategies to select relevant studies, the statistical methods used to synthesize their data are identical.
Screening and selection
Once the bibliographic search is complete (the date on which articles were retrieved from each database should be recorded), the articles are extracted and stored in a reference manager for screening. Before screening, inclusion and exclusion criteria should be defined to ensure consistency in study identification and retrieval; this is especially important when multiple reviewers are involved. The critical steps in screening and selection are:
(1) The removal of duplicates.
(2) Screening of relevant studies by title and abstract.
(3) Inspection of full texts to ensure they meet eligibility criteria.
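Deduplication (step 1) is commonly automated before manual screening begins. A minimal sketch, assuming each record exposes a title and an optional DOI (the field names and records below are hypothetical):

```python
# Hypothetical bibliographic records; real exports (RIS/BibTeX) carry
# similar fields. Deduplicate by DOI when present, otherwise by a
# normalized title.
records = [
    {"title": "Shear Stress in Osteocytes", "doi": "10.1000/ab1"},
    {"title": "Shear stress in osteocytes.", "doi": "10.1000/ab1"},  # duplicate
    {"title": "Fluid flow and bone cells",   "doi": None},
    {"title": "FLUID FLOW AND BONE CELLS",   "doi": None},           # duplicate
]

def dedup_key(rec):
    if rec["doi"]:
        return ("doi", rec["doi"].lower())
    # Fall back to a normalized title: lowercase, alphanumerics only.
    norm = "".join(ch for ch in rec["title"].lower() if ch.isalnum())
    return ("title", norm)

unique = {}
for rec in records:
    unique.setdefault(dedup_key(rec), rec)  # keep first occurrence

print(f"{len(records)} records -> {len(unique)} after deduplication")
```

Matching on a normalized title catches near-duplicates that differ only in case or punctuation, which is where reference managers' built-in deduplication can err.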
There are several reference managers available, such as Mendeley and Rayyan, the latter developed specifically to support screening for systematic reviews.
However, 98% of authors report using EndNote, Reference Manager, or RefWorks to prepare their reviews (Lorenzetti and Ghali, 2013). Reference managers often have deduplication capabilities; however, these features can be tedious and error-prone (Kwon et al., 2015).
Protocol in EndNote
A protocol for faster and more reliable deduplication in EndNote has recently been proposed (Bramer et al., 2016). The selection of articles should be broad enough that it is not dominated by a single laboratory or author. In basic research articles, it is common to find data sets that are reused by the same group in multiple studies.
Therefore, extra precautions should be taken when deciding to include multiple studies published by the same group. At the end of the search, screening, and selection process, the reviewer obtains a complete list of eligible full-text manuscripts. The entire screening and selection process should be reported in a PRISMA flow diagram, which traces the flow of information through the review according to prescribed guidelines published elsewhere.
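The bookkeeping behind a PRISMA diagram amounts to tracking how many records survive each stage. A minimal sketch with purely illustrative counts:

```python
# Illustrative record counts at each PRISMA stage (all numbers hypothetical).
identified = 1250            # records returned by the database searches
after_dedup = 980            # remaining after duplicates are removed
after_title_abstract = 140   # remaining after title/abstract screening
included = 35                # full texts meeting all eligibility criteria

stages = [
    ("Duplicates removed", identified - after_dedup),
    ("Excluded at title/abstract", after_dedup - after_title_abstract),
    ("Excluded at full-text review", after_title_abstract - included),
]
for label, n in stages:
    print(f"{label}: {n}")
print(f"Studies included in synthesis: {included}")
```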