Editorial Commentary

How to make a systematic review live up to its name: perspectives from journal editors

Binghan Shang#, Yao Lin#, Fanghui Yang, Kaiping Zhang

Editorial Office, Annals of Translational Medicine, AME Publishing Company, Hong Kong, China

#These authors contributed equally to this work.

Correspondence to: Kaiping Zhang. AME Publishing Company, Hong Kong, China. Email: zhangkp@amegroups.com.

Keywords: Systematic review (SR); review types; editors’ perspectives; methodology; evidence-based medicine


Submitted Dec 12, 2022. Accepted for publication Apr 30, 2023. Published online May 10, 2023.

doi: 10.21037/atm-22-6305


Since its introduction in 1992, evidence-based medicine has received increasing attention (1). Moreover, the idea of arranging medical evidence hierarchically in a pyramid has played an essential role in policy-making (2). Take the development of clinical practice policy as an example: policymakers need to systematically collect all evidence on a topic, evaluate its place in the evidence hierarchy and grade its quality, and weigh the potential benefits to patients in order to assign a strength of recommendation to each treatment option. Among the various types of research, systematic reviews (SRs) and meta-analyses have occupied the top of this pyramid for good reason. However, SRs have attracted growing criticism in recent years, mainly because of the excessive surge in their numbers and the growing number of low-quality SRs. A key reason for the rapid increase in the number of SRs in the literature is that evidence-based medicine has permeated many medical disciplines. Ioannidis reported that between 1991 and 2014, the number of articles tagged as SRs in PubMed surged from 1,024 to 28,959, a growth rate of 2,728% (3). However, this surge is considered misleading and a sign of overproduction, and it has prompted criticism because of the suboptimal methodological rigor of many articles claiming to be SRs (3). Research has shown that only about 3% of SRs have good methodological quality, report their results transparently, and provide clinically usable evidence for treatment decisions (4). Even top medical journals are no exception: only 1% of the SRs they publish are considered to be of high quality (5). A recent living SR targeting the problems with SRs further highlights many flaws in the conduct, methods, and reporting of published SRs (6).
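As a quick arithmetic check (not part of the cited report), the 2,728% figure follows directly from the two PubMed counts:

\[
\frac{28{,}959 - 1{,}024}{1{,}024} \times 100\% \approx 2{,}728\%
\]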

The reasons behind this lack of high-quality SRs likely include the increased demand for information integration in the era of big data, the difficulty of securing sufficient effort and funding, a lack of professional training in conducting SRs, a lack of SR-related expertise, and the challenge of recruiting enough collaborators [especially during the coronavirus disease 2019 (COVID-19) pandemic]. Of these issues, the latter two (a lack of relevant expertise and an insufficient number of collaborators) are frequently encountered in our work as journal editors. In particular, they often manifest as the misclassification of review types (i.e., many articles that claim to be SRs are, in fact, not SRs) and as SRs that list only a single author.

This commentary discusses these two issues from the perspective of journal editors and offers some suggestions on how they can be addressed to improve the quality of SRs.


Issues in the misclassification of review types

The reasons for reviews being misclassified are multifaceted. A lack of SR-related expertise is one prevalent reason that we encounter in our editorial work. Authors often lack sufficient knowledge to distinguish between the various types of reviews. For example, we have received manuscripts labeled as SRs in which the search strategy was not systematic at all (e.g., a search of a single database with a limited time frame and limited search terms) and the risk of bias assessment was omitted altogether. Authors therefore need to understand the definition and basic requirements of an SR, as well as the similarities and differences between the various types of reviews.

Regarding the definition and requirements of an SR, the Cochrane Database of Systematic Reviews—the leading journal and database for SRs in healthcare—defines an SR of interventional studies as “a review that uses explicit, systematic methods to collate and synthesize findings of studies that address a clearly formulated question” (7). The core methodology includes: (I) determining the review’s scope and questions; (II) inclusion criteria and grouping for synthesis; (III) searching for and selecting studies; (IV) collecting data; (V) effect measures; (VI) bias and conflicts of interest; (VII) assessing risk of bias; (VIII) bias due to missing results; (IX) ‘Summary of findings’ tables and/or GRADE; (X) interpreting the results (7). It would therefore be inappropriate to classify a review that omits key steps of this methodology (e.g., steps III and VII) as an SR. In our view, only articles that have performed an explicit and systematic search and synthesis (qualitative or quantitative), and that have objectively assessed the risk of bias of the included studies, can reasonably be classified as SRs.
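For illustration only (this is not an editorial screening tool, and the field names below are hypothetical), the minimum criteria described above can be expressed as a simple check, sketched here in Python:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewMethods:
    # Hypothetical summary of the methods reported in a review manuscript
    databases_searched: List[str] = field(default_factory=list)
    explicit_search_strategy: bool = False   # reproducible search strategy reported
    synthesis_performed: bool = False        # qualitative or quantitative synthesis
    risk_of_bias_assessed: bool = False      # objective appraisal of included studies

def meets_minimum_sr_criteria(m: ReviewMethods) -> bool:
    # A review missing any of these elements (e.g., steps III or VII above)
    # should not be classified as an SR
    return (len(m.databases_searched) > 1
            and m.explicit_search_strategy
            and m.synthesis_performed
            and m.risk_of_bias_assessed)

# A single-database search without a risk of bias assessment does not qualify
manuscript = ReviewMethods(databases_searched=["PubMed"],
                           explicit_search_strategy=True,
                           synthesis_performed=True,
                           risk_of_bias_assessed=False)
print(meets_minimum_sr_criteria(manuscript))  # prints: False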

Additionally, authors need to be familiar with the other review types to avoid confusing them with SRs. To the best of our knowledge, there are currently no internationally accepted rules for distinguishing the various types of reviews. Grant and Booth (8) recognized this problem more than a decade ago and defined the various types of reviews and their characteristics in detail. Nevertheless, Sutton et al. (9) found ‘frequent inconsistencies or overlaps between the descriptions of nominally different review types’. Bougioukas et al. (10) recently categorized overviews of reviews in health care into seven types based on their methodological approach. Although that classification concerns overviews of reviews, it is also valuable for classifying a broader range of reviews. In this article, we briefly summarize the characteristics of the most frequently published types of reviews (Table 1).

Table 1

Characteristics of the most common types of reviews

Narrative review
Description: A summary and explanation of the complete and current state of knowledge on a limited topic, as found in academic books and journal articles; also known as a “literature review”
Protocol: NA
Search: May or may not include comprehensive searching; narrative reviews are not always explicit in their methods
Quality appraisal: May or may not include quality assessment
Synthesis: Typically narrative
Analysis: May be chronological, conceptual, thematic, etc.
Reporting guideline: NA
Example: Liu et al. (PMID: 35434035)

Mini review
Description: A shorter review of topics that may be controversial or unresolved compared with a traditional review
Protocol: NA
Search: May or may not include comprehensive searching; mini reviews are not always explicit in their methods
Quality appraisal: May or may not include quality assessment
Synthesis: Typically narrative
Analysis: NA
Reporting guideline: NA
Example: Korasidis et al. (PMID: 27826571)

Systematic review
Description: Seeks to systematically search for, appraise, and synthesize evidence from primary studies, often adhering to guidelines on the conduct of a review
Protocol: Yes (e.g., Cochrane Database of Systematic Reviews and PROSPERO)
Search: Aims to be exhaustive and comprehensive: (I) definitely employs more than one database, and grey literature should be included; (II) recommended supplementary search methods include hand searching, reference list checking, citation searching, and contact with experts
Quality appraisal: Assessment of the risk of bias of the included primary studies; quality assessment may determine the inclusion/exclusion of studies
Synthesis: Typically narrative with tabular accompaniment
Analysis: (I) What is known and recommendations for practice; (II) what remains unknown, uncertainty around findings, and recommendations for future research
Reporting guideline: PRISMA
Example: Pellicori et al. (PMID: 33704775)

Overview of reviews
Description: Uses explicit and systematic methods to search for and identify multiple systematic reviews on a similar topic for the purpose of extracting and analyzing their results across important outcomes
Protocol: Yes (e.g., PROSPERO)
Search: (I) The unit of searching, inclusion, and data analysis is the systematic review; (II) overlapping systematic reviews must be managed
Quality appraisal: Assessment of the methodological quality/risk of bias of the included systematic reviews; also, risk of bias assessment of the primary studies contained within the included systematic reviews
Synthesis: Typically narrative with tabular accompaniment
Analysis: There are two main ways to analyze outcome data: (I) summarizing outcome data, in which data are extracted as they were reported in the underlying systematic reviews; (II) re-analyzing outcome data, in which relevant outcome data are extracted from the included systematic reviews and re-analyzed in a way that differs from the original analyses
Reporting guideline: PRIOR
Example: Xiong et al. (PMID: 31838477)

Rapid review
Description: A type of knowledge synthesis, limited by time or resources, in which components of the systematic review process are simplified or omitted to produce information in a short period of time; also known as “rapid evidence synthesis”
Protocol: NA
Search: (I) Should involve detailed negotiation between the review team and the client/customer regarding the scope and methods to establish how they will be delivered within the time available; (II) the search process may be abbreviated, or the appraisal, synthesis, or analysis stages removed or simplified
Quality appraisal: Time-limited formal quality assessment
Synthesis: Typically narrative and tabular
Analysis: The quantity of literature and the overall quality/direction of the effect of the literature
Reporting guideline: NA
Example: Nussbaumer-Streit et al. (PMID: 33959956)

Scoping review
Description: Preliminary assessment of the potential size and scope of the available research literature; aims to identify the nature and extent of research evidence (usually including ongoing research); also known as a “scoping study”
Protocol: Recommended (e.g., OSF)
Search: Completeness of the search is determined by time/scope constraints; literature may include research in progress
Quality appraisal: NA
Synthesis: Typically tabular with some narrative commentary
Analysis: Characterizes the quantity and quality of the literature, perhaps by study design and other key features; attempts to specify a viable review
Reporting guideline: PRISMA-ScR
Example: Rellum et al. (PMID: 35070381)

Living review
Description: A type of review that continually incorporates relevant new evidence as it becomes available
Protocol, methods, and reporting guideline: Depend on the type of review. For example, for a living overview of reviews, it is recommended to (I) register the protocol on PROSPERO, perform searches, quality appraisal, synthesis, and analysis according to the methodological requirements of an overview of reviews, and follow the PRIOR reporting guideline; and (II) update the review when new peer-reviewed evidence that significantly alters the direction or strength of the original conclusions emerges
Example: Khalili et al. (PMID: 33326318)

NA, not applicable; PRIOR, Preferred Reporting Items for Overviews of Reviews; PROSPERO, International Prospective Register of Systematic Reviews; OSF, Open Science Framework; PRISMA, Preferred Reporting Items for Systematic reviews and Meta-Analyses; PRISMA-ScR, Preferred Reporting Items for Systematic reviews and Meta-Analyses for Scoping Reviews.


Issues concerning the number of authors of an SR

The other issue we would like to address is the difficulty of having enough collaborators. The Cochrane Database of Systematic Reviews proposes that an SR should be conducted by a team, rather than a single reviewer, to minimize the possibility of errors (7). Similarly, the most widely used tool for assessing the quality of SRs, A MeaSurement Tool to Assess Systematic Reviews (AMSTAR), notes that “best practice for quality assessment with this tool requires two review authors to determine eligibility of studies for inclusion in systematic review” (11). Moreover, the Risk Of Bias In Systematic reviews (ROBIS) tool also suggests that the risk of bias assessment, the screening of titles and abstracts, and the assessment of full texts for inclusion should involve at least two reviewers (12). Bougioukas et al. analyzed 1,558 healthcare-related overviews of SRs published between 2000 and 2020 and found that, while the median number of authors was 5 (interquartile range, 3–7), 48 articles (3.1%) had only 1 author (13). Puljak has even argued that if an SR is conducted by a single author, it should not be called an SR and should be rejected by journal editors (14).

Among the manuscripts submitted to our journals, some SRs indeed have only one author, and such articles are usually rejected. Admittedly, researchers with limited resources may have to carry out an SR alone, without involving two or more reviewers in each step. A very common situation is an SR conducted by a student, either as a course assignment or as a dissertation. It is essential that students carry out such SRs independently in order to receive credit or a degree, and recruiting collaborators could raise concerns about collusion; publishing such SRs afterwards is therefore challenging. In our editorial work, we have also encountered some unique scenarios in which an SR has a sole author. For example, Kyzas described his extensive experience with SRs, acknowledged the potential risk of selection bias, and stressed that his results should be interpreted with caution (15). The author of another SR (16), Koscielny, emphasized in the methods section that other collaborators had been engaged to reduce bias, and credited them in the acknowledgments rather than in the author list.

SRs conducted by a single author do prompt concerns about their methodological quality and the reliability of their results. One study showed that, on average, a single reviewer misses 8% of eligible reports, whereas paired reviewers miss none (17). Furthermore, it has been reported that relying on a single reviewer increases errors and reduces the identification of eligible studies (18). Moreover, with a single reviewer, some bias in the quality appraisal of the included studies is inevitable. Therefore, whenever possible, authors should involve at least two reviewers when conducting an SR.
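As a brief illustration of how paired reviewers can monitor their screening work (the agreement statistic and the decision lists below are our own hypothetical example, not a method described in the cited studies), Cohen’s kappa can be used to quantify chance-corrected agreement between two reviewers before conflicts are resolved:

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Chance-corrected agreement between two raters' paired categorical decisions
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude decisions from two reviewers screening 10 abstracts
reviewer_1 = ["include", "exclude", "exclude", "include", "exclude",
              "include", "exclude", "exclude", "include", "exclude"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude",
              "include", "exclude", "exclude", "exclude", "exclude"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # prints: 0.58

Records on which the two reviewers disagree would then be resolved by discussion or, if needed, by a third reviewer.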


Editors’ advice for carrying out a high-quality SR

Here, we give four recommendations for conducting a high-quality SR, including advice on addressing the two issues mentioned above.

Conducting an SR to a high standard is paramount for providing high-quality empirical evidence for decision-making in health policy and practice

Any academic, clinician, or health professional conducting an SR should ideally receive SR-specific systematic training and guidance to ensure they have sufficient expertise. This might include reading the Cochrane Handbook, attending well-rounded courses, and participating in hands-on training workshops. One study found that inter-rater reliability improved after intensive training in Cochrane’s risk of bias assessment tool (19). In addition, professionally trained personnel make more accurate judgments during an SR, and the potential for misclassifying review types is reduced.

Furthermore, whenever possible, authors should register and develop a detailed protocol before conducting an SR. By doing so, authors can identify completed or ongoing research on the topic, which helps to avoid unintended duplication and redundancy. Creating a detailed protocol also helps to clarify the research question, develop a thorough and comprehensive search strategy, and set out clear eligibility criteria. Moreover, the publication of protocols minimizes potential publication bias and prevents unfavorable clinical outcomes from being masked. Research shows that SRs with published protocols tend to be more transparently reported and of higher quality (20). Authors are therefore strongly encouraged to prospectively register their SRs in a relevant registry, such as the Cochrane Database of Systematic Reviews, the International Prospective Register of Systematic Reviews (PROSPERO), the Registry of Systematic Reviews/Meta-Analyses in Research Registry, or the International Platform of Registered Systematic Review and Meta-analysis Protocols.

Ensuring sufficient collaborators

Conducting an SR to a high standard does not require a large number of authors, but it does require enough collaborators to carry out the key steps. An SR is a resource-intensive endeavor, and researchers need to recognize that some key stages, particularly study selection and quality appraisal, are best conducted by at least two reviewers. Even for the literature search, which is usually conducted by one researcher (preferably an information professional or librarian), there is a move towards peer review of search strategies, which would ideally involve a second information specialist, although this is not yet mandated.

However, this does not mean that an SR with a single author is necessarily unreliable. Some SRs with only one author do use at least two reviewers for the key steps, but these collaborators are not involved in the other work that would allow them to meet the criteria for authorship (16). Specifically, the International Committee of Medical Journal Editors (ICMJE) has clear eligibility criteria for authorship: those who are not involved in the conception, design, manuscript writing, or final approval of a manuscript should not be included in its author list. Of note, when more than one researcher is involved in an SR but only one meets the authorship criteria, the author should clearly acknowledge the other contributors and their corresponding contributions to the paper. For an SR conducted by a student, once the project has been completed independently and credit or a degree has been awarded, it is important, when possible, to find collaborators to repeat and review the essential steps of the SR before submitting the manuscript to an academic journal.

Making good use of automated tools to improve efficiency

Carrying out an SR is time-consuming work. Some researchers underestimate the effort required, which can lead to delays or failure to complete the review. Many innovations and approaches are emerging to improve the efficiency of conducting SRs without compromising reproducibility or accuracy, such as automation tools and other aids (e.g., the tools catalogued in the Systematic Review Toolbox, http://www.systematicreviewtools.com/). Research has shown that using such tools has the potential to reduce workload and save time while maintaining methodological quality (21). In one case study, the average time the automation team spent on review tasks was far shorter than that spent by the manual team, while the error rates for title, abstract, and full-text screening were similar in both groups (22). However, automated tools are not yet widely adopted, and much of their use still depends on personal experience or on recommendations from colleagues and peers. Even in studies with decent adoption rates, automated tools are most often used in the screening phase, with less satisfactory adoption in other phases (23). The main barriers reported by researchers include difficulties in obtaining licenses, a lack of knowledge and a steep learning curve, technical issues, a lack of support, a mismatch with current workflows, values, and practices, and insufficient trust in the tools (23,24).
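To make the screening-phase use of automation concrete, the sketch below (in Python, using scikit-learn) shows the general idea behind relevance-ranking screening tools of the kind cited above: a reviewer labels a small seed set, a simple text classifier ranks the remaining records, and the records most likely to be relevant are screened first. The titles and labels are invented for illustration, and this is not the workflow of any specific tool.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set already screened by a reviewer
labeled_titles = [
    "Statins for secondary prevention of cardiovascular events: a randomized trial",
    "Effect of statin therapy on LDL cholesterol in older adults",
    "Soil microbiome diversity in temperate forests",
    "A survey of bird migration patterns in northern Europe",
]
labels = [1, 1, 0, 0]  # 1 = relevant to the review question, 0 = not relevant

# Hypothetical records still awaiting screening
unlabeled_titles = [
    "High-intensity statin treatment and myocardial infarction risk",
    "Crop rotation strategies for sustainable agriculture",
]

# Fit a simple TF-IDF + logistic regression relevance model on the seed set
vectorizer = TfidfVectorizer()
X_labeled = vectorizer.fit_transform(labeled_titles)
model = LogisticRegression().fit(X_labeled, labels)

# Rank the unscreened records by predicted probability of relevance
scores = model.predict_proba(vectorizer.transform(unlabeled_titles))[:, 1]
for title, score in sorted(zip(unlabeled_titles, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {title}")

In practice, the reviewer would keep labeling records and re-ranking iteratively, and the final inclusion decisions would still be made by human reviewers.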

In the future, broader use of automated tools for SRs will require a joint effort by multiple stakeholders. (I) Developers of automated tools should make their tools as user-friendly as possible by inviting more researchers to join early in the development process, ensuring consistency with existing workflows, values, and practices as far as possible, shortening the learning curve with step-by-step educational videos, and providing handbooks that address common technical issues. (II) Academic institutions and public platforms [e.g., the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) website, academic societies, and public health websites] should conduct research to generalize and promote the robust use of automated tools throughout the SR process. (III) Researchers may need to cultivate a mindset that balances the comfort of existing experience against tools that take them out of their comfort zone, asking: are there automated tools that could help me work more effectively on this new SR than my previous approach did?

Reporting transparently and completely following reporting guidelines

For SRs, one of the most widely applied guidelines is PRISMA. Previous research has found an improvement in the quality and completeness of SR reporting when authors adhere to the PRISMA guidelines (25). Authors who comply with recognized guidelines also tend to make a good first impression on journal editors and reviewers when submitting their articles. Extensions of PRISMA for different types of SRs are available on the PRISMA website (http://www.prisma-statement.org/Extensions/).


Summary

When conducting an SR, a lack of expertise and an insufficient number of collaborators are common barriers. However, they are not justifications for lowering the quality of an SR; an SR should be conducted to the highest possible standard. Any academic who performs an SR should ensure they have the relevant expertise, which not only helps to differentiate SRs from other types of reviews but also makes the results more clinically relevant and reliable. Furthermore, conducting an SR requires a sufficient number of collaborators to ensure the robustness of the results. Of note, the proper use of automation tools can help to overcome the workload-related challenges of conducting an SR. It must also be emphasized that reporting guidelines play a pivotal role in ensuring transparency in the publication of SRs. Finally, as journal editors, we would like to remind authors that they are not alone; editors and reviewers are also their partners. Throughout the publication process, we all work together for high-quality evidence-based medicine.


Acknowledgments

We thank Jennifer Reynolds for polishing the language of this paper.

Funding: None.


Footnote

Provenance and Peer Review: This article was commissioned by the editorial office, Annals of Translational Medicine. The article has undergone external peer review.

Peer Review File: Available at https://atm.amegroups.com/article/view/10.21037/atm-22-6305/prf

Conflicts of Interest: The authors have completed the ICMJE uniform disclosure form (available at https://atm.amegroups.com/article/view/10.21037/atm-22-6305/coif). All authors report that they are full-time employees of AME Publishing Company (publishers of Annals of Translational Medicine). The authors have no other conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA 1992;268:2420-5. [Crossref] [PubMed]
  2. Magni P, Bier DM, Pecorelli S, et al. Perspective: Improving Nutritional Guidelines for Sustainable Health Policies: Current Status and Perspectives. Adv Nutr 2017;8:532-45. [PubMed]
  3. Ioannidis JP. The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses. Milbank Q 2016;94:485-514. [Crossref] [PubMed]
  4. Niforatos JD, Weaver M, Johansen ME. Assessment of Publication Trends of Systematic Reviews and Randomized Clinical Trials, 1995 to 2017. JAMA Intern Med 2019;179:1593-4. [Crossref] [PubMed]
  5. Nascimento DP, Almeida MO, Scola L, et al. Letter to the Editor - Not even the top general medical journals are free of spin: A wake-up call based on an overview of reviews. J Clin Epidemiol 2021;139:232-4. [Crossref] [PubMed]
  6. Uttley L, Quintana DS, Montgomery P, et al. The problems with systematic reviews: a living systematic review. J Clin Epidemiol 2023;156:30-41. [Crossref] [PubMed]
  7. Higgins J, Thomas J, Chandler J, et al. Cochrane handbook for systematic reviews of interventions: version 6.3. London: Cochrane 2022. Available online: https://training.cochrane.org/handbook/current
  8. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J 2009;26:91-108. [Crossref] [PubMed]
  9. Sutton A, Clowes M, Preston L, et al. Meeting the review family: exploring review types and associated information retrieval requirements. Health Info Libr J 2019;36:202-22. [Crossref] [PubMed]
  10. Bougioukas KI, Pamporis K, Vounzoulaki E, et al. Types and associated methodologies of overviews of reviews in health care: a methodological study with published examples. J Clin Epidemiol 2023;153:13-25. [Crossref] [PubMed]
  11. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017;358:j4008. [Crossref] [PubMed]
  12. Whiting P, Savović J, Higgins JP, et al. ROBIS: A new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol 2016;69:225-34. [Crossref] [PubMed]
  13. Bougioukas KI, Vounzoulaki E, Mantsiou CD, et al. Global mapping of overviews of systematic reviews in healthcare published between 2000 and 2020: a bibliometric analysis. J Clin Epidemiol 2021;137:58-72. [Crossref] [PubMed]
  14. Puljak L. If there is only one author or only one database was searched, a study should not be called a systematic review. J Clin Epidemiol 2017;91:4-5. [Crossref] [PubMed]
  15. Kyzas P. The impact of volume and surgical throughput on outcomes in head and neck reconstruction: a systematic review. Front Oral Maxillofac Med 2022;4:23. [Crossref]
  16. Koscielny A. What is the value of animal models in laparoscopic surgery?—a systematic review. Ann Laparosc Endosc Surg 2022;7:37. [Crossref]
  17. Robson RC, Pham B, Hwee J, et al. Few studies exist examining methods for selecting studies, abstracting data, and appraising quality in a systematic review. J Clin Epidemiol 2019;106:121-35. [Crossref] [PubMed]
  18. Doust JA, Pietrzak E, Sanders S, et al. Identifying studies for systematic reviews of diagnostic tests was difficult due to the poor sensitivity and precision of methodologic filters and the lack of information in the abstract. J Clin Epidemiol 2005;58:444-9. [Crossref] [PubMed]
  19. da Costa BR, Beckett B, Diaz A, et al. Effect of standardized training on the reliability of the Cochrane risk of bias assessment tool: a prospective study. Syst Rev 2017;6:44. [Crossref] [PubMed]
  20. Allers K, Hoffmann F, Mathes T, et al. Systematic reviews with published protocols compared to those without: more effort, older search. J Clin Epidemiol 2018;95:102-10. [Crossref] [PubMed]
  21. Gates A, Gates M, Sebastianski M, et al. The semi-automation of title and abstract screening: a retrospective exploration of ways to leverage Abstrackr's relevance predictions in systematic and rapid reviews. BMC Med Res Methodol 2020;20:139. [Crossref] [PubMed]
  22. Clark J, McFarlane C, Cleo G, et al. The Impact of Systematic Review Automation Tools on Methodological Quality and Time Taken to Complete Systematic Review Tasks: Case Study. JMIR Med Educ 2021;7:e24418. [Crossref] [PubMed]
  23. Scott AM, Forbes C, Clark J, et al. Systematic review automation tools improve efficiency but lack of knowledge impedes their adoption: a survey. J Clin Epidemiol 2021;138:80-94. [Crossref] [PubMed]
  24. Arno A, Elliott J, Wallace B, et al. The views of health guideline developers on the use of automation in health evidence synthesis. Syst Rev 2021;10:16. [Crossref] [PubMed]
  25. Sun X, Zhou X, Yu Y, et al. Exploring reporting quality of systematic reviews and Meta-analyses on nursing interventions in patients with Alzheimer's disease before and after PRISMA introduction. BMC Med Res Methodol 2018;18:154. [Crossref] [PubMed]
Cite this article as: Shang B, Lin Y, Yang F, Zhang K. How to make a systematic review live up to its name: perspectives from journal editors. Ann Transl Med 2023;11(9):325. doi: 10.21037/atm-22-6305
