On 2016 Jan 14, Arturo Casadevall commented:
The central criticism is that we have compared variables for which there is no causal relationship. We recognize the difficulties involved in assuming causality and the dangers of spurious correlations when plotting unrelated variables. Furthermore, we are fully aware that correlation is not causation. However, the criticism made by Levine and Weinstein does not take into account a large body of published scholarly work showing that spending of public funds translates into medical goods such as new therapeutics. To make this point, we note the findings of several studies. In 2000, a United States Senate report found that of the 21 most important drugs introduced between 1965 and 1992, 15 (71%) ‘were developed using knowledge and techniques from federally funded research’ (http://www.faseb.org/portals/2/pdfs/opa/2008/nih_research_benefits.pdf). A recent study of 26 transformative drugs or drug classes found that many were discovered with governmental support [1]. Numerous other studies have reinforced this point [2-4]. Kohout-Blume estimated that a 10% increase in targeted funding for specific diseases produced a 4.5% increase in the number of drugs reaching clinical trials, after an average lag of 12 years [5]. In our own investigations, we have traced the ancestry of most of the drugs licensed in the past four decades to publicly funded research (unpublished data). The literature in this field overwhelmingly supports the notion that public spending on biomedical research translates into public goods. The debate is not about whether this happens but rather about the magnitude of the effect. The notion that publicly funded biomedical research generates basic knowledge that is subsequently used in drug development is accepted by most authorities. Hence, the use of the NIH budget as a proxy for public spending on biomedical research is appropriate.
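As an aside, the Kohout-Blume estimate can be read as a funding-to-output elasticity of roughly 0.45 (a 4.5% output gain per 10% funding gain). The sketch below is our own back-of-the-envelope illustration of how such an elasticity could be applied; the 20% scenario is purely hypothetical and is not from [5]:

```python
# Illustrative only: the funding-to-drugs elasticity implied by [5],
# i.e. a 10% funding increase -> ~4.5% more drugs entering trials
# (after an average 12-year lag).
elasticity = 0.045 / 0.10  # ~0.45

def projected_drug_increase(funding_growth, elasticity=elasticity):
    """Constant-elasticity projection: output scales as (1 + growth)**elasticity."""
    return (1 + funding_growth) ** elasticity - 1

# Hypothetical scenario: a 20% funding increase.
print(round(elasticity, 2))                     # 0.45
print(round(projected_drug_increase(0.20), 3))  # 0.086, i.e. ~8.6% more drugs
```

Whether the true relationship is constant-elasticity is, of course, exactly the open question discussed below.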
We are aware that establishing causal relationships among non-experimental variables can be a daunting task. However, we note that the relationship between public spending and medical goods does meet some of the essential criteria needed to establish causality. First, the relationship meets the requirement of temporal causality, since for many drugs publicly funded basic research precedes drug development. Second, we also have mechanistic causality, since knowledge from basic research is used in designing drugs. There are numerous examples of mechanistic causality, including receptors identified with public funds that are subsequently exploited in drug development when industry generates an agonist or inhibitor. We acknowledge that we do not know whether the relationship between public spending and drug development is linear, and the precise mathematical formulation for how public spending translates into medical goods is unknown. In the absence of evidence for a more complex relationship, a linear relationship is a reasonable first approximation, and we note that other authorities have also assumed linear relationships in analyzing inputs and outcomes in the pharmaceutical industry. For example, Scannell et al. [6] used a similar analysis to make the point that ‘The number of new drugs approved by the US Food and Drug Administration (FDA) per billion US dollars (inflation-adjusted) spent on research and development (R&D) has halved roughly every 9 years’.
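For concreteness, ‘halved roughly every 9 years’ corresponds to an annual decline of about 7.4% in inflation-adjusted NMEs per R&D dollar. The arithmetic below is our own check, not a calculation from [6]:

```python
import math

# "Halving every 9 years" (Scannell et al. [6]) expressed as an annual rate:
# efficiency(t) = efficiency(0) * 2 ** (-t / 9)
halving_years = 9.0
annual_factor = 2 ** (-1 / halving_years)  # fraction retained each year
annual_decline = 1 - annual_factor         # fraction lost each year

print(round(annual_decline * 100, 1))  # 7.4  (% per year)

# Sanity check: compounding the annual factor over 9 years gives one half.
assert math.isclose(annual_factor ** 9, 0.5)
```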
The authors claim to have performed a causality analysis of the data generated in our paper, concluding that ‘We do not find evidence that NIH budget ⇒ NME (p=0.475), and thus it may not be a good indicator of biomedical research efficiency.’ However, this oversimplifies the very complex process by which public spending affects NME development; we do not agree that this simple analysis can be used to deny causality. Although the limited information provided in their comment does not permit a detailed rebuttal, we note that a failure to reject the Granger causality null hypothesis does not necessarily indicate the absence of causality. Furthermore, Granger causality refers to the ability of one variable to improve predictions of the future values of a second variable, which is distinct from the philosophical definition of causality. Whether or not NIH budget history adds predictive power for the number of NMEs approved in the future, it cannot negate the fact that basic biomedical research funding unequivocally influences the creation of future drugs, as well as many other outcomes. Therefore, we stand by our use of NIH funding and NMEs as indicators of biomedical research inputs and outcomes.
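For readers unfamiliar with the method: a Granger test simply asks whether lagged values of one series improve a regression forecast of another, typically via an F-test comparing a restricted model (own lags only) with an unrestricted one (own lags plus the other series' lags). The sketch below uses synthetic data of our own invention; the variable names, lag structure, and coefficients are hypothetical and do not reproduce the Levine and Weinstein analysis:

```python
import numpy as np

# Granger-style F-test on synthetic data in which x drives y with a
# two-period lag (very loosely mimicking funding -> drug approvals).
rng = np.random.default_rng(0)
n, p = 200, 2                          # sample size, number of lags
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 2] + rng.normal()

# Lagged design matrices for t = p .. n-1.
target = y[p:]
ones = np.ones(n - p)
restricted = np.column_stack([ones, y[1:-1], y[:-2]])          # y lags only
unrestricted = np.column_stack([restricted, x[1:-1], x[:-2]])  # + x lags

def rss(X, z):
    """Residual sum of squares from an OLS fit of z on X."""
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    return resid @ resid

rss_r, rss_u = rss(restricted, target), rss(unrestricted, target)
q, k = p, unrestricted.shape[1]        # restrictions, unrestricted params
F = ((rss_r - rss_u) / q) / (rss_u / (len(target) - k))
print(F > 4.0)  # True: x's lags clearly improve the forecast of y
```

The converse scenario is the point at issue: even when such a test fails to reach significance (as in the authors' p=0.475), that speaks only to linear predictive ability over the lags examined, not to the presence or absence of an underlying causal mechanism.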
The authors suggest that another study by Rzhetsky et al. [7] contradicts the findings of our paper and provides a better method of studying biomedical research efficiency. The work by Rzhetsky et al., while very interesting, addresses a fundamentally different question relating to how scientists can most efficiently choose research topics to explore a knowledge network [7]. The allocation of scientists to research topics is undoubtedly a possible contributor to overall research efficiency, but the approach used in that analysis is very different from our broader analysis of the biomedical research enterprise as a whole. The work in [7] has a narrower scope and does not attempt to study the impact of research investments in producing societal outcomes. The central conclusion of our paper is that biomedical research inputs and outputs are increasing much faster than outcomes, as measured by NMEs and life expectancy (LE).
We do not ‘conjecture that a lack of relevance or rigor in biomedical research’ is solely responsible for this phenomenon, as Levine and Weinstein assert. Instead, our paper discusses a number of possible explanations, many of which have been previously identified in the literature [6-12], including several that agree with the conclusions of Rzhetsky et al. [7]. However, the recent epidemic of retracted papers and the growing concerns about the reproducibility of biomedical studies, expressed in part by pharmaceutical companies dedicated to the discovery of NMEs [13, 14], are indisputable facts. If a substantial portion of basic science findings are unreliable, this is likely to contribute to reduced productivity of the research enterprise. We agree with the suggestion, made by others [6], that research difficulty increases as a field matures; this does not contradict our analysis and is mentioned in our paper’s discussion. Biomedical research efficiency is complex, and it is likely that the decline in scientific outputs has numerous causes. It is appropriate for scientists to consider any factors that may be contributing to this trend, and the comments from Dr. Schuck-Paim in this regard (see the other posted comments) are therefore welcome.
In summary, we do not find the arguments of Levine and Weinstein compelling. We note that other investigators have come to conclusions similar to ours [6, 15]. The productivity crisis in new drug development has been intensively discussed for at least a decade [6, 15, 16]. We believe that addressing inefficiencies in biomedical research is essential to maintaining public confidence in science and, by extension, public funding for basic research.
Arturo Casadevall and Anthony Bowen
[1] Health Aff (Millwood) 2015, 34:286-293.
[2] PNAS 1996, 93:12725-12730.
[3] Am J Ther 2002, 9:543-555.
[4] Drug Discov Today 2015, 20:1182-1187.
[5] J Policy Anal Manage 2012, 31:641-660.
[6] Nat Rev Drug Discov 2012, 11:191-200.
[7] PNAS 2015, 112:14569-14574.
[8] Res Policy 2014, 43(1):21–31.
[9] Nature 2015, 521(7552):270–271.
[10] Br J Cancer 2014, 111(6):1021–1046.
[11] Nature 2015, 521(7552):274–276.
[12] J Psychosom Res 2015, 78(1):7–11.
[13] Nature 2012, 483(7391):531-533.
[14] Nat Rev Drug Discov 2011, 10(9):712.
[15] Nat Rev Drug Discov 2009, 8:959-968.
[16] Innov Policy Econ 2006, 7:1-32.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.