Editorial
Reflections on the history of systematic reviews

    One of the key elements in evidence-based medicine (EBM) is reliable information from research on the benefits and harms of specific interventions, actions or strategies. This is true for resolving uncertainties about interventions that might be used to treat illnesses or improve well-being, and also for choosing screening or diagnostic tests, understanding risk factors and estimating the current and future burden of disease. As the principles and practice of EBM have become more accepted and widespread over the last few decades, there has been a tremendous accompanying growth in the number of systematic reviews, and wider recognition of their value. From sporadic examples before the 1980s, through the estimated 3000 that were indexed in MEDLINE during the two decades to 2000,[1] 200 000 or more might now be available.[2] More than 10 000 systematic reviews are published every year, and over 30 000 are registered in the prospective registry, PROSPERO.[3] They are a vital part of EBM, and many of the reasons that we value them today have echoes in history.

    We have written elsewhere about this history of systematic reviews[4–6] but reflect here on two aspects of the history and their relevance to EBM today and in the future: quality and quantity.

    People making decisions and choices about health and social care need access to high-quality evidence from research. Systematic reviews provide this both by highlighting the quality of existing studies and by themselves providing a high-quality summary. In a 1753 example of what we might now call a systematic review, James Lind, in his treatise on scurvy, presented a ‘Critical and Chronological View of what has been published on the subject’. He wrote, ‘It became requisite to exhibit a full and impartial view of what had hitherto been published on the scurvy’ and ‘before the subject could be set in a clear and proper light, it was necessary to remove a great deal of rubbish’.[7]

    This uncovering of rubbish research continues to be an important role for systematic reviews, not least in demonstrating the enormous amount of research waste caused by poor-quality studies.[8] Furthermore, reviews are now widely accepted as the most reliable source of knowledge from research. They are placed at the top of the hierarchy of evidence across the whole range of clinical questions in the 2011 revision of the Oxford Centre for Evidence Based Medicine levels of evidence (www.cebm.net/2011/06/explanation-2011-ocebm-levels-evidence) and are the core building blocks for clinical and policy guidelines published by organisations such as the WHO.[9] This dominance of systematic reviews looks set to continue, but it must be tempered by the need for users to appraise the reviews they might use and not simply assume that the label ‘systematic review’ implies quality.[10]

    Turning to quantity, or perhaps more appropriately quantitative analysis: when Gene Glass introduced the term ‘meta-analysis’, it was in part to capture the combining of results from separate studies.[11] This statistical synthesis of the findings of separate but similar studies is now a prominent feature of systematic reviews, with perhaps millions of meta-analyses across the hundreds of thousands of reviews in the literature. However, history shows us that this is not new. In the 1720s, James Jurin, the secretary of the Royal Society, Thomas Nettleton and John Gasper Scheuchzer combined the results of multiple studies to estimate the effects of inoculation for smallpox in England.[12–15]

    In the early 20th century, Karl Pearson, director of the Biometric Laboratory at University College London, England, combined five studies of immunity and six studies of mortality to investigate the effects of a vaccine against typhoid,[16] and Park and his colleagues synthesised the results of three studies of serum treatment for lobar pneumonia.[17]

    The mathematical techniques have been refined over time,[2, 18] and systematic reviewers today can draw on numerous statistical packages to help with their analyses and to produce the ubiquitous forest plots that display their results.[19] However, the future may see important developments in both the data used in these analyses and the way that studies are brought together. One of the first individual participant data meta-analyses used data from nearly 2500 patients who had taken part in nine trials to assess the effects of anticoagulant therapies after myocardial infarction.[20] Today, increased access to this type of data looks set to deliver more such reviews,[21, 22] and the newer techniques of mixed treatment comparisons, or network meta-analyses, promise to change how studies are combined to identify the most effective, acceptably safe interventions.[23]
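
    These packages implement well-established methods. As a concrete illustration of the kind of calculation they automate, the short Python sketch below pools three invented study results using a standard fixed-effect, inverse-variance weighted average and prints a rudimentary text version of a forest display. The study names, effect estimates and standard errors are hypothetical and are not drawn from this editorial or its references; this is a minimal sketch of the general technique, not a substitute for a dedicated meta-analysis package.

```python
# Minimal, illustrative sketch: a fixed-effect, inverse-variance meta-analysis
# with a crude text "forest" display. All study data below are hypothetical.

import math

# Each study contributes an effect estimate (e.g., a log odds ratio) and its
# standard error.
studies = [
    ("Study A", -0.35, 0.20),
    ("Study B", -0.10, 0.15),
    ("Study C", -0.28, 0.25),
]

# Inverse-variance weights: w_i = 1 / SE_i^2
weights = [1.0 / se ** 2 for _, _, se in studies]
total_weight = sum(weights)

# Fixed-effect pooled estimate and its standard error
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / total_weight
pooled_se = math.sqrt(1.0 / total_weight)


def ci95(estimate, se):
    """Approximate 95% confidence interval assuming normality."""
    return estimate - 1.96 * se, estimate + 1.96 * se


for (name, est, se), w in zip(studies, weights):
    lo, hi = ci95(est, se)
    print(f"{name:8s} {est:+.2f} [{lo:+.2f}, {hi:+.2f}]  weight {100 * w / total_weight:5.1f}%")

lo, hi = ci95(pooled, pooled_se)
print(f"{'Pooled':8s} {pooled:+.2f} [{lo:+.2f}, {hi:+.2f}]")
```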

    In summary, an understanding of history can reveal that things we might think of as ‘new’ or novel often have a past that goes back decades or centuries. This is the case with the need for systematic reviews and the value of science accumulating evidence.[24] What seems different now is how wider awareness of the need for robust evidence, together with technological and other advances, makes it much easier to find and review individual studies, leading to such dramatic increases in the number of systematic reviews. This makes it all the more important that reviews themselves do not become wasteful, that they are kept up to date, and that they are done to the high standards needed to justify their status in evidence-informed decision making.

    References

    1. Lee WL, Bausell RB, Berman BM. The growth of health-related meta-analyses published from 1980 to 2000. Eval Health Prof 2001;24:327–35. doi:10.1177/01632780122034948
    2. Gurevitch J, Koricheva J, Nakagawa S, et al. Meta-analysis and the science of research synthesis. Nature 2018;555:175–82. doi:10.1038/nature25753
    3. Page MJ, Shamseer L, Tricco AC. Registration of systematic reviews in PROSPERO: 30,000 records and counting. Syst Rev 2018;7:32. doi:10.1186/s13643-018-0699-4
    4. Chalmers I, Hedges LV, Cooper H. A brief history of research synthesis. Eval Health Prof 2002;25:12–37. doi:10.1177/0163278702025001003
    5. Clarke M. History of evidence synthesis to assess treatment effects: personal reflections on something that is very much alive. J R Soc Med 2016;109:154–63. doi:10.1177/0141076816640243
    6. Starr M, Chalmers I, Clarke M, et al. The origins, evolution, and future of The Cochrane Database of Systematic Reviews. Int J Technol Assess Health Care 2009;25(Suppl 1):182–95. doi:10.1017/S026646230909062X
    7. Lind J. A treatise of the scurvy. In three parts, containing an inquiry into the nature, causes, and cure, of that disease, together with a critical and chronological view of what has been published on the subject. Edinburgh: Printed by Sands, Murray and Cochran for A Kincaid and A Donaldson, 1753.
    8. Ioannidis JP, Greenland S, Hlatky MA, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet 2014;383:166–75. doi:10.1016/S0140-6736(13)62227-8
    9. WHO. WHO handbook for guideline development. 2nd edn. 2014. http://apps.who.int/medicinedocs/documents/s22083en/s22083en.pdf
    10. Ioannidis JP. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q 2016;94:485–514. doi:10.1111/1468-0009.12210
    11. Glass GV. Primary, secondary, and meta-analysis of research. Educ Res 1976;5:3–8. doi:10.3102/0013189X005010003
    12. Boylston A. Thomas Nettleton and the dawn of quantitative assessments of the effects of medical interventions. JLL Bulletin: Commentaries on the history of treatment evaluation. The James Lind Library, 2010. http://www.jameslindlibrary.org/articles/thomas-nettleton-and-the-dawn-of-quantitative-assessments-of-the-effects-of-medical-interventions/ (accessed 13 Apr 2018).
    13. Huth E. Quantitative evidence for judgments on the efficacy of inoculation for the prevention of smallpox: England and New England in the 1700s. The James Lind Library, 2005. http://www.jameslindlibrary.org/articles/quantitative-evidence-for-judgments-on-the-efficacy-of-inoculation-for-the-prevention-of-smallpox-england-and-new-england-in-the-1700s/ (accessed 13 Apr 2018).
    14. Jurin J. 1724. The James Lind Library, 2010. http://www.jameslindlibrary.org/jurin-j-1724/
    15. Scheuchzer JG. An account of the success of inoculating the small-pox in Great Britain, for the years 1727 and 1728: with a comparison between the mortality of the natural small-pox, and the miscarriages in that practice; as also some general remarks on its progress. The James Lind Library. http://www.jameslindlibrary.org/scheuchzer-jg-1729/ (accessed 13 Apr 2018).
    16. Pearson K. Report on certain enteric fever inoculation statistics. Br Med J 1904;2:1243–6.
    17. Park WH. The treatment of lobar pneumonia with refined specific antibacterial serum. J Am Med Assoc 1928;91:1503–8. doi:10.1001/jama.1928.02700200001001
    18. O’Rourke K. An historical perspective on meta-analysis: dealing quantitatively with varying study results. J R Soc Med 2007;100:579–82. doi:10.1177/0141076807100012020
    19. Lewis S, Clarke M. Forest plots: trying to see the wood and the trees. BMJ 2001;322:1479–80. doi:10.1136/bmj.322.7300.1479
    20. International Anticoagulant Review Group. Collaborative analysis of long-term anticoagulant administration after acute myocardial infarction: an international anticoagulant review group. Lancet 1970;1:203–9.
    21. Goldacre B, Lane S, Mahtani KR, et al. Pharmaceutical companies' policies on access to trial data, results, and methods: audit study. BMJ 2017;358:j3334. doi:10.1136/bmj.j3334
    22. Tierney JF, Vale C, Riley R, et al. Individual participant data (IPD) meta-analyses of randomised controlled trials: guidance on their use. PLoS Med 2015;12:e1001855. doi:10.1371/journal.pmed.1001855
    23. Lee AW. Review of mixed treatment comparisons in published systematic reviews shows marked increase since 2009. J Clin Epidemiol 2014;67:138–43. doi:10.1016/j.jclinepi.2013.07.014
    24. Rayleigh L. Address by the Rt. Hon. Lord Rayleigh. In: Report of the fifty-fourth meeting of the British Association for the Advancement of Science. 1885:3–23.

    Footnotes

    • Contributors Both authors contributed equally.

    • Competing interests None declared.

    • Patient consent Not required.

    • Provenance and peer review Commissioned; internally peer reviewed.
