6 Matching Annotations
    1. On 2016 Sep 16, Hilda Bastian commented:

      There are many important issues raised in this paper on which I strongly agree with John Ioannidis. There is a lot of research waste in meta-analyses and systematic reviews, and a flood of very low-quality ones, and he points out the contributing factors clearly. However, there are some issues to be aware of when considering this paper's analyses of the growth of these papers, and of their growth in comparison with randomized and other clinical trials.

      Although the author refers to PubMed's "tag" for systematic reviews, there is no tagging process for systematic reviews as there is for meta-analyses and trials. Although "systematic review" is available as a choice under "article types", that option is a filtered search using Clinical Queries (PubMed Help), not a tagging of publication type. Comparing filtered results to tagged results is not comparing like with like in two critical ways.

      Firstly, the proportion of non-systematic reviews in the filter is far higher than the proportion of non-meta-analyses and non-trials in the tagged results. Secondly, full tagging of publication types for MEDLINE/PubMed takes considerable time, so for a recent year the gulf between filtered and tagged results widens. For example, as of December 2015, when Ioannidis' searches were done, the tag identified 9,135 meta-analyses for 2014. Today (15 September 2016), the same search identifies 11,263. For the publication type randomized controlled trial, the number tagged for 2014 increased from 23,133 in December to 29,118 today.
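
      As a rough sketch of how such counts can be checked, the queries below use NCBI's public E-utilities esearch endpoint with standard PubMed syntax - Meta-Analysis[pt] and Randomized Controlled Trial[pt] for tagged publication types, and systematic[sb] for the filtered systematic reviews subset. The exact numbers will keep drifting as tagging catches up, so the output is illustrative only.

      ```python
      # Rough sketch: compare PubMed's tagged publication types with the
      # filtered systematic-review subset for publication year 2014, via
      # NCBI E-utilities. Counts drift over time as MEDLINE tagging catches up.
      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

      def pubmed_count(term):
          """Return the number of PubMed records matching a query."""
          params = urllib.parse.urlencode(
              {"db": "pubmed", "term": term, "rettype": "count"})
          with urllib.request.urlopen(EUTILS + "?" + params) as resp:
              return int(ET.parse(resp).getroot().findtext("Count"))

      for label, term in [
          ("Meta-analyses (tagged)", "Meta-Analysis[pt] AND 2014[dp]"),
          ("RCTs (tagged)", "Randomized Controlled Trial[pt] AND 2014[dp]"),
          ("Systematic reviews (filter)", "systematic[sb] AND 2014[dp]"),
      ]:
          print(label + ":", pubmed_count(term))
      ```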

      In the absence of tagging for systematic reviews, the more appropriate comparison is to use filters for both systematic reviews and trials as the base for trends, especially for a year as recent as 2014. Using the Clinical Queries filter for both systematic reviews and therapy trials (broad), for example, shows 34,126 systematic reviews and 250,195 trials. Page and colleagues estimate there were perhaps 8,000 actual systematic reviews according to a fairly stringent definition (Page MJ, 2016), and the Centre for Reviews and Dissemination added just short of 9,000 systematic reviews to its database in 2014 (PubMed Health). So far, the Cochrane Collaboration has around 38,000 trials in its trials register for 2014 (searching on the word trial in CENTRAL externally).

      The number of systematic reviews/meta-analyses has increased greatly, but not as dramatically as this paper's comparisons suggest, and the data do not tend to support the conclusion in the abstract here that "Currently, probably more systematic reviews of trials than new randomized trials are published annually".

      Ioannidis suggests some reasonable bases for duplication of systematic reviews - these are descriptive studies, with many subjective choices along the way. However, there is another critical reason that is not raised: the need for updates. Updates can come from the same group publishing a new version of a systematic review, or from others. In areas with substantial open questions and considerable ongoing research, multiple reviews are needed.

      I strongly agree with the concerns raised about conflicted systematic reviews. In addition to the issues of manufacturer conflicts, it is important not to underestimate the extent of other kinds of bias (see for example my comment here). Realistically, though, conflicted reviews will continue, building in a need for additional reviewers to tackle the same ground.

      Systematic reviews have found important homes in clinical practice guidelines, health technology assessment, and reimbursement decision-making for both public and private health insurance. But underuse of high-quality systematic reviews remains a more significant problem than is addressed here. Even when a systematic review does not identify a strong basis in favor of one option or another, it can still be valuable for decision-making - especially in the face of conflicted claims of superiority (and wishful thinking). However, systematic reviews are still not being used enough - especially in shaping subsequent research (see for example Habre C, 2014).

      I agree with Ioannidis that collaborations working prospectively to keep a body of evidence up to date are an important direction to go in - and it is encouraging that the living cumulative network meta-analysis has arrived (Créquit P, 2016). That direction was also highlighted in Page and Moher's accompanying editorial (Page MJ, 2016). However, I'm not so sure how much of a solution this is going to be. The experience of the Cochrane Collaboration suggests it is even harder than it seems. And consider how excited people were back in 1995 at the groundbreaking publication of the protocol for a prospective, collaborative meta-analysis of statin trials (Anonymous, 1995) - and the continuing controversy that swirls, tornado-like, around it today (Godlee, 2016).

      We need higher standards, and skills in critiquing the claims of systematic reviews and meta-analyses need to spread. Meta-analysis factories are a serious problem. But I still think the most critical issues we face are making systematic reviews quicker and more efficient to do, and using good ones more effectively and thoroughly than we do now (Chalmers I, 2009, Tsafnat G, 2014).

      Disclosure: I work on projects related to systematic reviews at the NCBI (National Center for Biotechnology Information, U.S. National Library of Medicine), including some aspects that relate to the inclusion of systematic reviews in PubMed. I co-authored a paper related to issues raised here several years ago (Bastian H, 2010), and was one of the founding members of the Cochrane Collaboration.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Sep 16, John Ioannidis commented:

      Dear Hilda,

      Thank you for the very nice and insightful commentary on my article. I think that my statement "Currently, probably more systematic reviews of trials than new randomized trials are published annually" is probably correct. The figure of 8,000 systematic reviews in the Page et al. 2016 article uses very conservative criteria, and there are many more systematic reviews and meta-analyses; for example, there is a factory of meta-analyses (even meta-analyses of individual-level data) produced by industry that combine data from several trials but make no explicit mention of a systematic literature search. While many papers may fail to satisfy stringent criteria for being systematic in their searches or other methods, they still carry the title of "systematic review", and most readers other than a few methodologists trust them as such. Moreover, the 8,000 figure was from February 2014, i.e. over 2.5 years ago, and the publication rates of systematic reviews and meta-analyses rise geometrically. Conversely, there is no such major increase in the annual rate of published randomized controlled trials.

      Furthermore, the figure of 38,000 trials in the Cochrane database is misleading, because it includes both randomized and non-randomized trials, and the latter may be the majority. Moreover, each randomized controlled trial may have anywhere up to hundreds of secondary publications: on average, within less than 5 years of a randomized trial's publication, there are 2.5 other secondary publications from the same trial (Ebrahim et al. 2016). Thus the number of published new randomized trials per year is likely to be smaller than the number of published systematic reviews and meta-analyses of randomized trials. Indeed, if we also consider the fact that the large majority of randomized trials are small or very small and have little or no impact, while most systematic reviews are routinely surrounded by the awe of the "highest level of evidence", one might even say that the number of systematic reviews of trials published in 2016 is likely to be several times larger than the number of sizable randomized trials published in the same time frame.
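
      A rough back-of-envelope sketch of that adjustment, using the ~29,000 tagged RCT records for 2014 cited in the comment above and the 2.5 secondary publications per trial from Ebrahim et al. as assumed, illustrative inputs:

      ```python
      # Back-of-envelope illustration, with assumed inputs from this thread:
      # if each randomized trial yields ~2.5 secondary publications on top of
      # its primary report, the yearly count of tagged RCT *publications*
      # overstates the yearly count of *new* trials by a factor of ~3.5.
      tagged_rct_publications_2014 = 29118  # tagged RCT records for 2014 (see above)
      secondary_per_trial = 2.5             # Ebrahim et al. 2016 (within ~5 years)

      # Assuming a roughly steady publication rate: each new trial eventually
      # contributes 1 primary + 2.5 secondary publications.
      estimated_new_trials = tagged_rct_publications_2014 / (1 + secondary_per_trial)
      print(f"Estimated new randomized trials in 2014: ~{estimated_new_trials:,.0f}")
      # -> roughly 8,300, well below the filtered systematic-review counts
      #    discussed in this thread.
      ```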


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 04, Matthew Romo commented:

      Thank you for this very thought-provoking paper. With the skyrocketing number of systematic reviews (and meta-analyses) published, I wonder how many did not identify any evidence for their research question. If systematic reviews are research, shouldn’t we expect null results, at least once in a while? Quantifying the relative number of systematic reviews with null results (which seem to be very few) might be helpful in further understanding the degree of bias in published systematic reviews. After all, research should be published based on the importance of the question it seeks to answer and its methodological soundness, rather than its results (Greenwald, 1993).

      "Null" systematic reviews that find no evidence can be very informative for researchers, clinicians, and patients, provided that the systematic review authors leave no stone unturned in their search, as they ought to for any systematic review. For researchers, they scientifically identify important gaps in knowledge where future research is needed. For clinicians and patients, they can provide an understanding of practices that don’t have a reliable evidence base. As stated quite appropriately by Alderson and Roberts in 2000, “we should be willing to admit that ‘we don’t know’ so the evidential base of health care can be improved for future generation.”

      Matthew Romo, PharmD, MPH
      Graduate School of Public Health and Health Policy, City University of New York

      Alderson P, Roberts I. Should journals publish systematic reviews that find no evidence to guide practice? Examples from injury research. BMJ. 2000;320:376-377.

      Greenwald AG. Consequences of prejudice against the null hypothesis. In: Keren G, Lewis C, eds. A Handbook for Data Analysis in the Behavioural Sciences. Hillsdale, NJ: Lawrence Erlbaum; 1993. pp. 419-448.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Nov 22, Natalie Parletta commented:

      Good point about null results being less likely to be published, Matthew Romo. I think we also need to consider that in some instances there is a vested interest in publishing null findings. One example is the systematic review on omega-3 fatty acids and cardiovascular disease (BMJ 2006; 332, doi: http://dx.doi.org/10.1136/bmj.38755.366331.2F), which did not include the positive studies published before 2000 (the studies that had led to recommendations to eat fish or take fish oil for CVD) and which has been critiqued for serious methodological flaws (https://www.cambridge.org/core/journals/british-journal-of-nutrition/article/pitfalls-in-the-use-of-randomised-controlled-trials-for-fish-oil-studies-with-cardiac-patients/65DDE2BD0B260D1CF942D1FF9D903239; http://www.issfal.org/statements/hooper-rebuttable). Incidentally, I learned that the journal that published one of the null studies sold 900,000 reprints to a pharmaceutical company (one that presumably sells statins).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Dec 10, Arnaud Chiolero MD PhD commented:

      These findings were, unfortunately, expected. They suggest that systematic reviews should not always be regarded as the highest level of evidence; it is evident that they can be seriously biased - like any other study.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 May 31, Sergio Uribe commented:

      Timely and necessary reflection.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.