12 Matching Annotations

All comments below were imported by Hypothesis from PubMed Commons and are licensed under CC BY.

  1. Jul 2018
    1. On 2017 May 31, Sergio Uribe commented:

      Timely and necessary reflection.



    2. On 2016 Dec 10, Arnaud Chiolero MD PhD commented:

      These findings were, unfortunately, expected. They suggest that systematic reviews should not always be regarded as the highest level of evidence; it is evident that they can be seriously biased - like any other study.



    3. On 2016 Nov 22, Natalie Parletta commented:

      Good point, Matthew Romo, that null results are less likely to be published. I think we also need to consider that in some instances there is a vested interest in publishing null findings. One example is the systematic review on omega-3 fatty acids and cardiovascular disease (BMJ 2006; 332, doi: http://dx.doi.org/10.1136/bmj.38755.366331.2F), which did not include positive studies published before 2000 (studies that had led to recommendations to eat fish or take fish oil for CVD) and which has been critiqued for serious methodological flaws (https://www.cambridge.org/core/journals/british-journal-of-nutrition/article/pitfalls-in-the-use-of-randomised-controlled-trials-for-fish-oil-studies-with-cardiac-patients/65DDE2BD0B260D1CF942D1FF9D903239; http://www.issfal.org/statements/hooper-rebuttable). Incidentally, I learned that the journal that published one of the null studies sold 900,000 reprints to a pharmaceutical company (one that presumably sells statins).



    4. On 2016 Nov 04, Matthew Romo commented:

      Thank you for this very thought-provoking paper. With the skyrocketing number of systematic reviews (and meta-analyses) published, I wonder how many did not identify any evidence for their research question. If systematic reviews are research, shouldn't we expect null results, at least once in a while? Quantifying the relative number of systematic reviews with null results (which seem to be very few) might be helpful in further understanding the degree of bias there is in published systematic reviews. After all, research should be published based on the importance of the question it seeks to answer and its methodological soundness, rather than its results (Greenwald, 1993).

      "Null" systematic reviews that find no evidence can be very informative for researchers, clinicians, and patients, provided that the systematic review authors leave no stone unturned in their search, as they ought to for any systematic review. For researchers, they scientifically identify important gaps in knowledge where future research is needed. For clinicians and patients, they can provide an understanding of practices that don't have a reliable evidence base. As stated quite appropriately by Alderson and Roberts in 2000, "we should be willing to admit that 'we don't know' so the evidential base of health care can be improved for future generations."

      Matthew Romo, PharmD, MPH, Graduate School of Public Health and Health Policy, City University of New York

      Alderson P, Roberts I. Should journals publish systematic reviews that find no evidence to guide practice? Examples from injury research. BMJ. 2000;320:376-377.

      Greenwald AG. Consequences of prejudice against the null hypothesis. In: Keren G, Lewis C, eds. A Handbook for Data Analysis in the Behavioural Sciences. Hillsdale, NJ: Lawrence Erlbaum; 1993. pp. 419–448.



    5. On 2016 Dec 30, Arturo Martí-Carvajal commented:

      What is the degree of responsibility of journal editors-in-chief and of peer reviewers in the publication of systematic reviews of doubtful quality?



    6. On 2016 Sep 18, Hilda Bastian commented:

      Thanks, John - that's as close as we'll get, and we do agree on far more than we disagree, as ever.

      I agree we should face the data, and be meticulous about it. I just don't agree that indexing has the same effect on a tagged category as it has for a filter: especially not when the filter is so broad that it encompasses the variety of terms people use to describe their work. I remain convinced that the appropriate time trend comparators are filter to filter, with triangulation of sources. I don't think it's highly likely that 90% of the RCTs are in the first 35% of tagged literature.

      I don't think people should hold off publishing a systematic review that was done before deciding to fund or run a trial until a report of the trial or its methods is published - and ideally, the review and the trial would be done by different people. Intellectual conflicts of interest can be as powerful as any other. And I don't think that trialists interpreting what their trial means in the context of other evidence meets the criterion of being unconflicted. Nor do I think the only systematic reviews we need are those of RCTs.

      I don't think Cochrane reviews are all good quality and unconflicted - in fact, the example of a conflicted review with quality issues in my comment was a Cochrane review. I agree there is no prestigious name that guarantees quality. (It's a long time since I left the Cochrane Collaboration, by the way.) My comments aren't because I disagree that there is a flood of bad quality "systematic" reviews and meta-analyses: the title of your article is one of the many things I agree with. See for example here, here, and quite a few of my comments on PubMed Commons.

      But the main reason for this reply is to add into this stream the reason I feel some grounds for optimism about something else we would both fervently agree on: the need to chip away at the problem of extensive under-reporting of clinical trials. As of January 2017, the mechanisms and incentives for reporting a large chunk of trials - those funded by NIH and affected by the FDA's scope - will change (NIH, 2016). Regardless of what happens with synthesis studies, any substantial uptick in trial reporting would be great news.



    7. On 2016 Sep 18, John Ioannidis commented:

      Dear Hilda,

      Thank you for all these wise thoughts. Based on prior experience, at this point in time (mid-September) the numbers for the "Type of Article" categories meta-analysis, systematic review, and randomized controlled trial for 2015 are likely to increase by about 10% with more complete indexing. I have taken this into account in my calculations.

      I fully agree with Iain Chalmers that every trial should start and finish with a systematic review. I fervently defend this concept upfront in my paper when I say that "it is irrational not to systematically review what is already known before deciding to perform any new study. Moreover, once a new study is completed, it is useful to update the cumulative evidence", even specifically citing Iain's work. But the publication of these systematic reviews is (and should be) integral to the publication of the specific new studies. I have not counted separately the systematic reviews that are embedded within trial publications. If I were to do this, then the numbers of systematic reviews would be even higher. My proposal goes even a step further in arguing that systematic reviews and meta-analyses should be even more tightly integrated with the primary studies. Meta-analyses should become THE primary studies par excellence.

      So, to your question "But in an ideal world, isn't a greater number of systematic reviews than RCTs just the way it should be?", the answer is clearly "No" if we are talking about the dominant paradigm of systematic reviews of low quality that are done in isolation from the primary evidence and represent a parallel universe serving mostly its own conflicts. The vast majority of the currently published systematic reviews are not high-quality, meticulous efforts such as Cochrane reviews, and they are entirely disjoint from primary studies. Cochrane reviews unfortunately represent less than 5% of this massive production. While I see that you and some other Cochrane friends have felt uneasy with the title of my paper and this has resulted in some friendly fire, I ask you to please look more carefully at this threatening pandemic which is evolving in the systematic review and meta-analysis world. Even though I trust that Cochrane is characterized by well-intentioned, non-conflicted and meticulous efforts, this bubble, which is 20-50 times larger than Cochrane, is growing next door. Let us please face the data, recognize this major problem and not try to defend ANY systematic reviews and meta-analyses as if they have value no matter what, just because they happen to carry such a prestigious name.



    8. On 2016 Sep 18, Hilda Bastian commented:

      Thanks, John, for taking this so seriously - that's extremely helpful, and I certainly agree with you that the rate of publication of systematic reviews is growing faster than that of RCTs. So the point you are talking about may well be reached eventually - unless the rate of growth equalizes, and unless the rate at which RCTs go unpublished drops substantially: both of those remain possible.

      This comparison is much better, but it can't solve the underlying issues. Human indexing resources did not increase exponentially along with the exponential increase in the literature. As of today, of PubMed records with 2015 in the PubMed entry date [EDAT], only 35% also have a 2015 date for completed indexing [DCOM] (which, from PubMed Help, looks to me like the way you would check for that - but an information specialist may correct me here). That's roughly what I would expect to see: individually indexing well over a million records a year is a colossal undertaking. Finishing 2015 in just a few months while 2016 priorities are pouring in would be amazing. And we know that no process of prioritizing journals will solve this problem for trials, because the scatter across journals is so great (Hoffmann T, 2012).
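
      As a minimal sketch - not part of the original comment - the [EDAT]/[DCOM] check described above could be reproduced against NCBI's public E-utilities ESearch endpoint. The Python rendering and the year-level date searches are assumptions on my part, and the counts will reflect whatever the index holds on the day the query is run:

        # Compare records that entered PubMed in 2015 [EDAT] with the subset
        # whose MEDLINE indexing was also completed in 2015 [DCOM].
        import json
        import urllib.parse
        import urllib.request

        ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

        def pubmed_count(term: str) -> int:
            """Return the number of PubMed records matching a search term."""
            params = urllib.parse.urlencode(
                {"db": "pubmed", "term": term, "rettype": "count", "retmode": "json"}
            )
            with urllib.request.urlopen(f"{ESEARCH}?{params}") as response:
                return int(json.load(response)["esearchresult"]["count"])

        entered = pubmed_count("2015[EDAT]")
        indexed = pubmed_count("2015[EDAT] AND 2015[DCOM]")
        print(f"Entered in 2015: {entered}; indexing completed in 2015: {indexed} "
              f"({100 * indexed / entered:.0f}%)")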

      So any comparison between a tagged set (RCTs) and a search based on a filter with text words (which includes systematic review or meta-analysis in the title or abstract) could generate potentially very biased estimates, no matter how carefully the results are analyzed. And good systematic reviews of non-randomized clinical trials - and indeed of other methodologies, such as systematic reviews of adverse events, qualitative studies, and more - are valuable too. Many systematic reviews would be "empty" of RCTs, but that doesn't make them useless by definition.

      I couldn't agree with you more enthusiastically, though, that we still need more, not fewer, well-done RCTs, systematic reviews, and meta-analyses by non-conflicted scientists. I do add a caveat, though, when it comes to RCTs. RCTs are human experimentation. It is not just that they are resource-intensive: unnecessary RCTs, and some of the ways that RCTs can be "bad", can cause direct harm to participants, in a way that an unnecessary systematic review cannot. The constraints on RCTs are greater: so they need to be done on questions that matter the most and where they can genuinely provide better information. If good enough information can come from systematically reviewing other types of research, then that's a better use of scarce resources. And if only so many RCTs can be done, then we need to be sure we do the "right" ones.

      For over 20 years, Iain Chalmers has argued that an RCT should not be done without a systematic review to show the RCT is justified - and that there should be an update afterwards. Six years ago, he, Mike Clarke, and Sally Hopewell concluded that we were nowhere near achieving that (Clarke M, 2010). The point you make about the waste in systematic reviewing underscores that point, too. But in an ideal world, isn't a greater number of systematic reviews than RCTs just the way it should be?



    9. On 2016 Sep 17, John Ioannidis commented:

      Dear Hilda,

      Thank you for your follow-up comment on my reply; I always cherish your insights. I tried to get a more direct answer to the question on which we both have some residual uncertainty, i.e. whether currently published systematic reviews of trials outnumber new randomized controlled trials. So, I collected more data.

      First, while we can disagree on some minor technical details, it is very clear that the annual rate has been increasing extremely fast for "meta-analyses" and very fast for "systematic reviews", while it is rising slowly for "randomized controlled trials" types of articles. In a search as of today, the numbers per year between 2009 and 2015 using the "type of article" searches (with all their limitations) are 3243-3934-4858-6570-8192-9632-9745 for meta-analysis, 15085-17353-19378-22575-25642-29261-31609 for systematic reviews, and 17879-18907-20451-22339-24538-24459-22066 for randomized controlled trials. The data are not fully complete for 2015, given that "type of article" assignments may have some delay, but comparing 2014 versus 2009, where the data are unlikely to change meaningfully with more tags, within 5 years the rate of publication of meta-analyses tripled and the rate of publication of systematic reviews doubled, while the rate of publication of randomized trials increased by only 36% (almost perfectly tracking the 33% growth of total PubMed items in the same period).
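
      The five-year growth figures quoted here follow directly from the counts in the preceding sentence; a small sketch (illustrative Python, using exactly the numbers reported in this comment) reproduces the arithmetic:

        # Annual "Type of Article" counts, 2009-2015, as quoted in this comment.
        counts = {
            "meta-analysis": [3243, 3934, 4858, 6570, 8192, 9632, 9745],
            "systematic review": [15085, 17353, 19378, 22575, 25642, 29261, 31609],
            "randomized controlled trial": [17879, 18907, 20451, 22339, 24538, 24459, 22066],
        }
        for label, series in counts.items():
            ratio = series[5] / series[0]  # 2014 (index 5) versus 2009 (index 0)
            print(f"{label}: x{ratio:.2f} from 2009 to 2014")
        # meta-analysis: x2.97 (tripled); systematic review: x1.94 (doubled);
        # randomized controlled trial: x1.37 (the 36% increase cited above)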

      Type of article is of course not perfectly sensitive or specific in searching. So, I took a more in-depth look at a sample of 111 articles with Type of article="randomized controlled trial" among the 22066 published in 2015 (in the order of being retrieved by a 2015 [DP] search, selecting the first and every 200th afterwards, i.e. 1, 201, 401, etc.). Of the 111, 17 represent secondary analyses (the majority of secondary analyses of RCTs are not tagged as "randomized controlled trial"), 5 are protocols without results, 6 are non-human randomized studies (on cattle, barramundi, etc.), and 12 are not randomized trials, leaving a maximum of 71 new randomized controlled trials. I say "maximum" because some of those 71 may actually not be randomized (e.g. there is a substantial number of "randomized" trials from China, and past in-depth evaluations have shown that many/most are not really randomized even if they say they are) and some others may also be secondary or duplicate publications, but this is not easy to decipher based on this isolated sampling. Even if 71/111 are new RCTs, this translates to (71/111)x22066=14114 new RCTs (or articles masquerading as new RCTs) in 2015. Allowing for some missed RCTs and not-yet-tagged ones, it is possible that the number of new RCTs published currently is in the range of 15,000 per year. Of the 71 studies that were new RCTs or masquerading as such, only 25 had over 100 randomized participants and only 1 had over 1000 randomized participants. Clinically informative RCTs are sadly very few.
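
      The sampling and extrapolation just described can be written out as a short sketch (illustrative Python; the classification counts are those reported in this comment, with 22,066 as the total tagged set):

        TOTAL_TAGGED = 22066                            # RCT-tagged articles, 2015 [DP]
        sample = list(range(1, TOTAL_TAGGED + 1, 200))  # the 1st, 201st, 401st, ...
        assert len(sample) == 111

        # Hand-classified exclusions from the comment: 17 secondary analyses,
        # 5 protocols without results, 6 non-human studies, 12 not randomized.
        new_rcts = 111 - 17 - 5 - 6 - 12                # at most 71 new RCTs
        estimate = new_rcts / 111 * TOTAL_TAGGED
        print(new_rcts, round(estimate))                # 71 14114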

      I also examined the studies tagged as Type of Article "meta-analysis" or "systematic review" or "review" published in 2015 [DP], combined with (trial* OR treatment* OR randomi*). Of the 49,166 items, I selected 84 for in-depth scrutiny (the first and every 600th afterwards, i.e. 1, 601, 1201, etc.). Overall, 30 of the 84 were systematic reviews and/or meta-analyses of trials, or might be masquerading as such to the average reader, i.e. had some allusion to search databases and/or search strategies and/or systematic tabulation of information. None of these 30 are affected by any of the potential caveats you raised (protocols, ACP Journal Club, split reviews, etc.). Extrapolating to the total 49,166, one estimates 17,988 systematic reviews and/or meta-analyses of trials (or masquerading as such) in 2015. Again, allowing for missed items (e.g. pooled analyses of multiple trials conducted by the industry are not tagged with these Types of Article), for those not yet tagged, and for a more rapid growth for such studies in 2016 than for RCTs, it is likely that the number of systematic reviews and/or meta-analyses of trials published currently is approaching 20,000 per year. If the criteria for "systematic review of trials" become more stringent (as in Page et al, 2016), this number will be substantially smaller, but will still be quite competitive against the number of new RCTs. Of course, if we focus on both stringent criteria and high quality, the numbers drop precipitously, as also happens with RCTs.

      I am sure that these analyses can be done in more detail. However, the main message is unlikely to change. There is a factory of RCTs and a far more rapidly expanding factory of systematic reviews and meta-analyses. The majority of the products of both factories are useless, conflicted, misleading or all of the above. The same applies to systematic reviews and meta-analyses for most other types of study designs in biomedical research. This does not mean that RCTs, systematic reviews, and meta-analyses are not a superb idea. If well done by non-conflicted scientists, they can provide the best evidence. We need more, not fewer, such studies that are well done and non-conflicted.



    10. On 2016 Sep 16, Hilda Bastian commented:

      Thanks, John, for the reply - and for giving us all so much to think about, as usual!

      I agree that there are meta-analyses without systematic reviews, but the tagged meta-analyses are included in the filter you used: they are not additional (NLM, 2016). It also includes meta-analysis in the title, guidelines, validation studies, and multiple other terms that add non-systematic reviews, and even non-reviews, to the results.

      In Ebrahim S, 2016, 191 primary trials, all in high-impact journals, were studied. Whether they are typical of all trials is not clear: it seems unlikely that they are. Either way, hundreds of reports for a single trial is far from common: half the trials in that sample had no secondary publications, only 8 had more than 10, and none had more than 54. Multiple publications from a single trial can sometimes be on quite different questions, which might also need to be addressed in different systematic reviews.

      The number of trials has not been increasing as fast as the number of systematic reviews, but the number has not reached a definite ongoing plateau either. I have posted an October 2015 update to the data using multiple ways to assess these trends in the paper by me, Paul Glasziou, and Iain Chalmers from 2010 (Bastian H, 2010) here. Trials have tended to fluctuate a little from year to year, but the overall trend is growth. As the obligation to report trials grows more stringent, the trend in publication may be materially affected.

      Meanwhile, "systematic reviews" in the filter you used have not risen all that dramatically since February 2014. For the whole of 2014, there were 34,126, and in 2015 there were 36,017 (with 19,538 in the first half of 2016). It is not clear without detailed analysis which part of the collection of types of paper is responsible for that increase. The method used to support the conclusion here about systematic reviews of trials overtaking trials themselves was to restrict the systematic review filter to those mentioning trials or treatment - "trial* OR randomi* OR treatment*". That does not mean the review is of randomized trials only: no randomized trial need be involved at all, and it doesn't have to be a review.

      Certainly, if you set the bar for a "sizable" randomized trial high, there will be fewer of them than of all possible types of systematic review: but then, there might not be all that many very sizable, genuinely systematic reviews either - and not all systematic reviews are influential (or even noticed). And yes, there are reviews that are called systematic that aren't: but there are RCTs called randomized that aren't, as well. What's more, an important response to the arrival of a sizable RCT may well be an updated systematic review.

      Double reports of systematic reviews are fairly common in the filter you used too, although far from half - and not more than 10. Still, the filter will be picking up protocols as well as their subsequent reviews, systematic reviews in both the article version and their coverage in ACP Journal Club, the full text of systematic reviews via PubMed Health as well as their journal versions (and the ACP Journal Club coverage too), individual patient data analyses based on other systematic reviews, and single systematic reviews split into multiple publications. The biggest issue remains, though, that because it is such a broad filter, casting its net so very wide across the evidence field, it's not an appropriate comparator for tagged sets, especially not in recent years.



    11. On 2016 Sep 16, John Ioannidis commented:

      Dear Hilda,

      Thank you for the very nice and insightful commentary on my article. I think that my statement "Currently, probably more systematic reviews of trials than new randomized trials are published annually" is probably correct. The quote of 8,000 systematic reviews in the Page et al. 2016 article uses very conservative criteria for systematic reviews, and there are many more systematic reviews and meta-analyses; e.g., there is a factory of meta-analyses (even meta-analyses of individual-level data) done by the industry, combining data from several trials but with no explicit mention of a systematic literature search. While many papers may fail to satisfy stringent criteria of being systematic in their searches or other methods, they still carry the title of "systematic review", and most readers other than a few methodologists trust them as such. Moreover, the 8,000 quote was from February 2014, i.e. over 2.5 years ago, and systematic reviews' and meta-analyses' publication rates rise geometrically. Conversely, there is no such major increase in the annual rate of published randomized controlled trials.

      Furthermore, the quote of 38,000 trials in the Cochrane database is misleading, because it includes both randomized and non-randomized trials, and the latter may be the majority. Moreover, each randomized controlled trial may have anywhere up to hundreds of secondary publications. On average, within less than 5 years of a randomized trial's publication, there are 2.5 other secondary publications from the same trial (Ebrahim et al. 2016). Thus the number of published new randomized trials per year is likely to be smaller than the number of published systematic reviews and meta-analyses of randomized trials. Actually, if we also consider the fact that the large majority of randomized trials are small/very small and have little or no impact, while most systematic reviews are routinely surrounded by the awe of the "highest level of evidence", one might even say that the number of systematic reviews of trials published in 2016 is likely to be several times larger than the number of sizable randomized trials published in the same time frame.



    12. On 2016 Sep 16, Hilda Bastian commented:

      There are many important issues raised in this paper on which I strongly agree with John Ioannidis. There is a lot of research waste in meta-analyses and systematic reviews, and a flood of very low-quality ones, and he points out the contributing factors clearly. However, there are some issues to be aware of in considering the analyses in this paper on the growth of these papers, and their growth in comparison with randomized and other clinical trials.

      Although the author refers to PubMed's "tag" for systematic reviews, there is no tagging process for systematic reviews, as there is for meta-analyses and trials. Although "systematic review" is available as a choice under "article types", that option is a filtered search using Clinical Queries (PubMed Help), not a tagging of publication type. Comparing filtered results to tagged results is not comparing like with like in two critical ways.

      Firstly, the proportion of non-systematic reviews in the filter is far higher than the proportion of non-meta-analyses and non-trials in the tagged results. And secondly, full tagging of publication types for MEDLINE/PubMed takes considerable time. When considering a recent year, the gulf between filtered and tagged results widens. For example, as of December 2015 when Ioannidis' searches were done, the tag identified 9,135 meta-analyses. Today (15 September 2016), the same search identifies 11,263. For the type randomized controlled trial, the number tagged increased from 23,133 in December to 29,118 today.
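
      To make the tag-versus-filter distinction concrete, the query strings below are an illustrative sketch (my rendering, assuming the PubMed field syntax in use at the time, not text taken from the comments): publication types are assigned record by record by indexers, while the systematic review "subset" is a stored filter that also matches text words:

        # Tagged publication types versus the broad systematic review filter.
        TAGGED_META = '"meta-analysis"[Publication Type] AND 2014[DP]'
        TAGGED_RCT = '"randomized controlled trial"[Publication Type] AND 2014[DP]'
        FILTERED_SR = 'systematic[sb] AND 2014[DP]'  # Clinical Queries filter, not a tag
        # The restriction to trial-related reviews discussed elsewhere in this thread:
        FILTERED_SR_OF_TRIALS = 'systematic[sb] AND (trial* OR randomi* OR treatment*) AND 2014[DP]'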

      In the absence of tagging for systematic reviews, the more appropriate comparison uses filters for both systematic reviews and trials as the base for trends, especially for a year as recent as 2014. Using the Clinical Queries filters for both systematic reviews and therapy trials (broad), for example, shows 34,126 systematic reviews and 250,195 trials. Page and colleagues estimate there were perhaps 8,000 actual systematic reviews according to a fairly stringent definition (Page MJ, 2016), and the Centre for Reviews and Dissemination added just short of 9,000 systematic reviews to its database in 2014 (PubMed Health). So far, the Cochrane Collaboration has around 38,000 trials in its trials register for 2014 (searching on the word trial in CENTRAL externally).

      The number of systematic reviews/meta-analyses has increased greatly, but not as dramatically as this paper's comparisons suggest, and the data do not tend to support the conclusion in the abstract here that "Currently, probably more systematic reviews of trials than new randomized trials are published annually".

      Ioannidis suggests some bases for some reasonable duplication of systematic reviews - these are descriptive studies, with many subjective choices along the way. However, there is another critical reason that is not raised: the need for updates. This can be by the same group publishing a new version of a systematic review or by others. In areas with substantial questions and considerable ongoing research, multiple reviews are needed.

      I strongly agree with the concerns raised about conflicted systematic reviews. In addition to the issues of manufacturer conflicts, it is important not to underestimate the extent of other kinds of bias (see for example my comment here). Realistically, though, conflicted reviews will continue, building in a need for additional reviewers to tackle the same ground.

      Systematic reviews have found important homes in clinical practice guidelines, health technology assessment, and reimbursement decision-making for both public and private health insurance. But underuse of high quality systematic reviews remains a more significant problem than is addressed here. Even when a systematic review does not identify a strong basis in favor of one option or another, that can still be valuable for decision making - especially in the face of conflicted claims of superiority (and wishful thinking). However, systematic reviews are still not being used enough - especially in shaping subsequent research (see for example Habre C, 2014).

      I agree with Ioannidis that collaborations working prospectively to keep a body of evidence up-to-date are an important direction to go in - and it is encouraging that the living cumulative network meta-analysis has arrived (Créquit P, 2016). That direction was also highlighted in Page and Moher's accompanying editorial (Page MJ, 2016). However, I'm not so sure how much of a solution this is going to be. The experience of the Cochrane Collaboration suggests this is even harder than it seems. And consider how excited people were back in 1995 at the groundbreaking publication of the protocol for the prospective, collaborative meta-analysis of statin trials (Anonymous, 1995) - and the continuing controversy that swirls, tornado-like, around it today (Godlee, 2016).

      We need higher standards, and skills in critiquing the claims of systematic reviews and meta-analyses need to spread. Meta-analysis factories are a serious problem. But I still think the most critical issues we face are making systematic reviews quicker and more efficient to do, and to use good ones more effectively and thoroughly than we do now (Chalmers I, 2009, Tsafnat G, 2014).

      Disclosure: I work on projects related to systematic reviews at the NCBI (National Center for Biotechnology Information, U.S. National Library of Medicine), including some aspects that relate to the inclusion of systematic reviews in PubMed. I co-authored a paper related to issues raised here several years ago (Bastian H, 2010), and was one of the founding members of the Cochrane Collaboration.


