- Oct 2021
-
www.pnas.org
-
Chu, J. S. G., & Evans, J. A. (2021). Slowed canonical progress in large fields of science. Proceedings of the National Academy of Sciences, 118(41). https://doi.org/10.1073/pnas.2021636118
-
- Sep 2020
-
www.youtube.com
-
JNOLive | Multiplicity of Randomized Clinical Trials for Coronavirus Disease 2019. (2020, July 14). https://www.youtube.com/watch?v=H-MWPqgLUvA&feature=youtu.be
-
- Apr 2016
-
deevybee.blogspot.com
-
it could be argued that we don’t just need an elite: we need a reasonable number of institutions in which there is a strong research environment, where more senior researchers feel valued and their graduate students and postdocs are encouraged to aim high. Our best strategy for retaining international competitiveness might be by fostering those who are doing well but have potential to do even better
capacity requires top and middle.
-
-
Local file
-
If incentives play an important role in the production of novel ideas, this heroic story might be atypical. In this article, we provide empirical evidence that nuanced features of incentive schemes embodied in the design of research contracts exert a profound influence on the subsequent development of breakthrough ideas.
Thesis of article.
-
-
www.helga-nowotny.eu
-
excellence to recognize excellence
Excellence to recognise excellence quotation.
-
-
we.vub.ac.be
-
I consider that my job, as a philosopher, is to activate the possible, and not to describe the probable, that is, to think situations with and through their unknowns when I can feel them
The job of a philosopher is to "activate the possible, not describe the probable."
-
- Mar 2016
-
download.springer.com
-
Pautasso, M. (2010). Worsening file-drawer problem in the abstracts of natural, medical and social science databases. Scientometrics, 85(1), 193–202.
-
Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13(2), 90–100.
-
Jeng, M. (2006). A selected history of expectation bias in physics. American Journal of Physics, 74(7), 578–583.
History of expectation bias in physics
-
Song, F., Parekh, S., Hooper, L., Loke, Y. K., Ryder, J., Sutton, A. J., et al. (2010). Dissemination and publication of research findings: An updated review of related biases. Health Technology Assessment, 14(8), 1–193.
positive bias
-
De Rond, M., & Miller, A. N. (2005). Publish or perish—Bane or boon of academic life? Journal of Management Inquiry, 14(4), 321–329.
On how increased pressure to publish diminishes creativity.
-
Several possible problems have been hypothesised, including: undue proliferation of publications and atomization of results (Gad-el-Hak 2004; Statzner and Resh 2010); impoverishment of research creativity, favouring "normal" science and predictable outcomes at the expense of pioneering, high-risk studies (De Rond and Miller 2005); growing journal rejection rates and bias against negative and non-significant results (because they attract fewer readers and citations) (Statzner and Resh 2010; Lortie 1999); sensationalism, inflation and over-interpretation of results (Lortie 1999; Atkin 2002; Ioannidis 2008b); increased prevalence of research bias and misconduct (Qiu 2010). Indirect empirical evidence supports at least some of these concerns. The per-capita paper output of scientists has increased, whilst their career duration has decreased over the last 35 years in the physical sciences (Fronczak et al. 2007). Rejection rates of papers have increased in the high-tier journals (Larsen and von Ins 2010; Lawrence 2003). Negative sentences such as "non-significant difference" have decreased in frequency in papers' abstracts, while catchy expressions such as "paradigm shift" have increased in the titles (Pautasso 2010; Atkin 2002). No study, however, has yet verified directly whether the scientific literature is enduring actual changes in content.
Good discussion (and bibliography) of problems involved in hyper competition
-
Formann, A. K. (2008). Estimating the proportion of studies missing for meta-analysis due to publication bias. Contemporary Clinical Trials, 29(5), 732–739.
estimate of positive bias in clinical trials.
-
Fronczak, P., Fronczak, A., & Holyst, J. A. (2007). Analysis of scientific productivity using maximum entropy principle and fluctuation-dissipation theorem. Physical Review E, 75(2), 026103. doi:10.1103/PhysRevE.75.026103.
On rising scientific productivity over shorter careers.
-
Atkin, P. A. (2002). A paradigm shift in the medical literature. British Medical Journal, 325(7378), 1450–1451.
On the rise of sexy terms like "paradigm shift" in abstracts.
-
Bonitz, M., & Scharnhorst, A. (2001). Competition in science and the Matthew core journals. Scientometrics, 51(1), 37–54.
Matthew effect
-
-
download.springer.com
-
The results presented here suggest that competition among researchers has pronounced effects on the way science is done. It affects the progress of science through secrecy and sabotage and interferes with peer review and other universalistic merit-review systems. It twists relationships within a field and can increase the likelihood of a scientist engaging in misconduct. None of the focus-group participants made reference to positive effects of competition on their work, despite the fact that the focus-group questions dealt in a general way with scientists' work and the norms of conduct that govern that work. If the protocol questions had asked explicitly about competition, doubtless there would have been some discussion about the positive aspects of science. In the context of the general questions, though, the scientists referred to competition as a constant and negative force that interferes with the way science is done. It is disconcerting to ponder the consequences of competition, such as mistrust and defensive posturing, for a community that has long been committed—in principle—to shared ideas and collegiality.
Conclusions on the negative results of competition
-
Competition is also manifested in scientists' pressured haste, leading to carelessness, which can verge on questionable behavior. One discussant talked about scientists "cutting a little corner" in order to get a paper out before others or to get a larger grant, and another said that she once published a result that she got three times in one week but could not replicate the following week.
How competitiveness also results in error.
-
But there's, I think there is a question of how you interpret the data, even ... if the experiments are very well designed. And, in terms of advice—not that I'm going to say that it's shocking—but one of my mentors, whom I very much respect as a scientist—I think he's extraordinarily good—advised me to always put the most positive spin you can on your data. And if you try to present, like, present your data objectively, like in a job seminar, you're guaranteed to not get the job.
Importance of "spinning" data
-
You are. And you know what the problems are in doing the experiments. And if you, in your mind, think that there should be one more control—because you know this stuff better than anybody else because you're doing it, you know—you decided not to do that, not to bring up what the potential difficulties are, you have a better chance of getting that paper published. But it's—I don't think it's the right thing to do.
deliberate positive bias
-
dishonesty occurs more with the postdoc. Because they want to get the data—whereas if they don't publish, they don't move on. And they, I think, are more likely to sort of fudge a little bit here and there if they need to get the data done. Unless, like you say, you watch them.
senior faculty over-estimate the likelihood of juniors committing misconduct.
-
One mid-career scientist told a story of how he and others in his lab counteracted an abuse of power by his mentor, a senior scientist, while he was in training. His mentor received a manuscript to review that was authored by a "quasi-competitor." It presented results of experiments similar to those that were going on in the mentor's lab. The scientist continued, "That paper ... basically would have beat us to the punch. They would have published these results before us, and they would have gotten credit, and not us. And my mentor, God bless him, sat on the paper." The mentor not only delayed writing the review but asked someone working in the lab to write it (a move of questionable ethicality in itself). That lab person and our respondent decided, in response, to stall their own work, so that their lab would not have an unfair advantage over the group who submitted the paper for review. In the end, the original group got credit for the findings, while the respondent's lab was also able to publish their slightly different findings. He ended his story with, "Sometimes you're in an awkward position, and you try to do the best thing you can under the circumstances, within your own internal ethical clock or whatever. And sometimes it's ugly and it's imperfect, but it's the only thing you can do. If we had gone to the mentor and voiced this objection, our careers would have been over. If we had approached the journal—God forbid, forget it." The speaker qualified this story by saying that it made him sound much more ethical than he actually is.
peer review deliberately delayed in order to slow competitor
-
The focus-group discussions showed, however, that scientists see peer review as affording a unique, even protected opportunity for competitors to take advantage of them. In this sense, competition infects the peer review process, not only through scientists' competition with other applicants, but also through scientists' distrust of the reviewers themselves, as competitors. The following exchange among mid-career discussants shows their sense of vulnerability.
Evidence that peer-reviewers are competitors.
-
By contrast, others use it in the publication process solely to maintain their competitive bid for priority in a line of research inquiry, as a way to sabotage others' progress. A scientist in an early-career group acknowledged the need to make results reproducible by telling people "the whole recipe of the whole method" if asked directly, but then she talked about the "little trick" of not including all the details in a publication or presentation. Like the scientists in a different group quoted above, she mentioned others' practice of taking photographs of poster presentations in order then to publish the results first. She said that people, in defense, "omit tiny little details": "But sometimes in the publication, people, just to protect themselves, will not give all the details. It's always right, but maybe it's not totally complete—to protect themselves. Because your ideas get stolen constantly, and it's so competitive if you're a small lab."
sabotaging reproducibility
-
A more deliberate form of not sharing is the omission of critical details in presentations, papers and grant proposals so that others will have difficulty replicating and extending one's own research.
Sabotaging reproducibility.
-
I know a large number of people in that category, in my own experience, who ... opted out because they didn't want to play. They didn't want to play the kind of games that have to be played to be successful, and in bringing in money and getting the papers out. There's so much more than just doing good science that comes into it. There's so much communication and there's salesmanship that has to go on.
On the negative impact competition has on career choice
-
I really hate to admit this, but you do the same thing with your competitors as you do with grant agencies. You sucker-punch them. You might have—when I submit a paper, I already have the next two or three papers' worth of data. I mean, I know what I'm going to be publishing a year from now, mostly. But the paper that comes out of my lab is Part A. Parts B and C are mostly on my desk. And I've put things in Part A to basically entice my competitors into making an ass out of themselves, or to second guess, or say, "Oh that must be wrong because of that, or something."
Gaming referees.
-
You submit the first grant, you propose the novel thing. You know damn well any study section that's even mildly conservative is going to give you, "Well, it sounds promising." They might give you a good score, you hope for a good score, but it's not going to get funded, because it's too novel, it's too risky, it's too blah blah. But you already have the damn data. You know on the second resubmit, you're going to say, "Good point! We took that to heart. Oh, what a wonderful suggestion! We will worry about this too. Guess what? Here's the data!" Shove it down their throat. And then it's funded. Because, wow, you flagged them, you sucker-punched them. They said, "This is really novel, blah, blah. Boy if you could only do that, that would be a great grant." Well, you already did do it, and that's the point. And you basically sucker-punch the study section into giving you the money by default. They have to at that point. They don't have a choice.
On the need to have results before funding is given.
-
Competition for funding, publications, scientific priority and overall career success leads scientists to describe their work in terms of strategies one might use in a game. Focus-group participants revealed that, like it or not, working within the scientific community requires artful maneuvering and strategizing.
relationship of competition to explicit game-playing.
-
In addition to that, the other thing that they focus on is science as celebrity.... So the standards are, "How much did it cost, and is it in the news?" And if it didn't cost much and if it is not in the news, but it got a lot of behind-the-scenes talk within your discipline, they don't know that, nor do they care.
Importance of news-worthiness.
-
You've got to have a billion publications in my field. That is the bottom line. That's the only thing that counts. You can fail to do everything else as long as you have lots and lots of papers.
Importance of publications in science--overrules everything else.
-
In short, there are many people (the oversupply factor) competing for prestigious, desirable and scarce rewards and resources (the funding factor), in a struggle that bestows those rewards disproportionately on those of marginally greater achievement (the tournament factor). This situation is supported to the detriment of that "legion of the discontented" and to the benefit of senior investigators, because it "generates good research by employing idealistic young graduate students and postdoctoral fellows at low cost" [26]. In other words, the benefits accrue to funding and employing institutions. This paper explores some of the costs that accompany these benefits.
Economic structure of competition at universities.
-
Richard B. Freeman and colleagues [28] have characterized the problem as follows: "Research in the biosciences fits a tournament economic structure. A tournament offers participants the chance of winning a big prize—an independent research career, tenure, a named chair, scientific renown, awards—through competition.... It fosters intense competition by amplifying small differences in productivity into large differences in recognition and reward. Well-structured tournaments stimulate competition. Because the differences in rewards exceed the differences in output, there is a disproportionate incentive to 'win'" (p. 2293). Research environments in which only small numbers of scientists have the opportunity to gain significant attention increase the competitive stakes: playing the game may be a gamble, but the payoff for winning is significant [28, 36].
The tournament structure of biosciences.
-
Another perspective sees competition as a function not just of funding, but of the balance between supply and demand of resources, particularly human resources. In the current competitive system, young scientists are pitted against one another for attractive career opportunities that are becoming increasingly scarce [28]. Researchers, feeling the pressure to be first to present findings in their fields, employ armies of graduate students and postdoctoral fellows and strive to make their laboratory groups the smartest and the fastest. The result is a "postdoc bottleneck" [29] where the supply for highly educated and trained researchers far exceeds the demand [30–33]. In concrete terms, Donald Kennedy and colleagues [34] have described the structural problem as a source of excess supply of human capital: "We've arranged to produce more knowledge workers than we can employ, creating a labor-excess economy that keeps labor costs down and productivity high" (p. 1105). The system produces, they claim, a "legion of the discontented" [34]. They argue that institutional and policy decisions about training scientists should be coupled to placement histories of recent graduates, numbers of intellectual offspring of faculty, and job markets for scientists. Roger L. Geiger [35] has suggested that the imbalance between supply and demand is due in part to deficiencies in graduate
Role of lack of positions. Interestingly, this has been shown by Fang et al to be not reflected in misconduct stats: i.e. the vast majority of scientific fraud is conducted by senior (male) scientists, not job hungry post-docs or grad students.
-
Analysts differ as to the reasons why competition has intensified. Some see the situation in terms of money. Tempering the effects of competition is not a prime impetus behind calls by the National Science Board [26] and by a recent coalition of 140 college presidents and other leaders [27] for more federal funding for scientific research; however, some scientists see such advocacy movements in terms of easing certain aspects of competition that are worsened by tight dollars. More money, more positions, and overall expansion of the research enterprise would improve the situation.
role of funding
-
There are indications, however, that the nature of competition has changed in recent years. Goodstein [25] argues that this shift is linked to negative outcomes: Throughout most of its history, science was constrained only by the limits of its participants' imagination and creativity. In the past few decades, however, that state of affairs has changed dramatically. Science is now held back mainly by the number of research posts and the amount of research funds available. What had been a purely intellectual competition has become an intense struggle for scarce resources. In the long run, this change, which is permanent and irreversible, will probably have an undesirable effect on ethical behavior among scientists. Instances of scientific fraud will almost surely become more common, as will other forms of scientific misconduct (p. 31).
Relationship of the negative aspects of competition to a change in funding model that promotes scarcity. See Goodstein, D. (2002). Scientific misconduct. Academe, 88, 28–31.
-
It is negatively correlated with subscription to normative systems (either traditional or alternative) and sense of community.
Scientific competition is negatively correlated with ethical systems and sense of community.
-
Melissa S. Anderson [20] furthermore found that a competitive departmental environment in science is positively correlated with exposure to misconduct, fears of retaliation for whistle-blowing, and conflict.
More evidence of correlation between competition and "exposure to misconduct".
-
Empirical findings show a strong, positive relationship between the level of perceived competition in an academic department and the likelihood that departmental members will observe misconduct among their colleagues [19].
The higher the level of perceived competition in an academic department, the greater the likelihood that people will see misconduct among their peers.
-
David Blumenthal and colleagues [17] found that university geneticists and other life scientists who perceive higher levels of competition in their fields are more likely to withhold data or results. Such withholding took the form of omitting information from a manuscript or delaying publication to protect one's scientific lead, maintaining trade secrets, or delaying publication to protect commercial value or meet a sponsor's requirements. John P. Walsh and Wei Hong [18] have reported similar findings.
Evidence that competition causes scientists to withhold results and/or data
-
Competition in science has its bright side, which past analysts and commentators tended to emphasize and current writers often affirm. It has been credited with ensuring that ideas, work, proposals and qualifications of all interested parties are evaluated prior to the distribution of rewards, particularly funding and positions. From this perspective, competition promotes open examination and fair judgment. The norm of universalism [11] is supported when all qualified people have the opportunity to propose and defend their ideas and work in open competition [12]. After all, absent competition, cronyism is likely to flourish.
Positive value of competition in science.
-
Increases in levels of competition in science are symptomatic of a more general hypercompetitive shift in organizations.
The general rise to hypercompetitiveness. (source: https://hypothes.is/a/AVO5uuxxH9ZO4OKSlamG)
-
Because science is a cumulative, interconnected, and competitive enterprise, with tensions among the various societies in which research is conducted, now more than ever researchers must balance collaboration and collegiality with competition and secrecy.
The Institute of Medicine's call to balance collaboration and collegiality against competition and secrecy.
-
Bok, D. (2003). Universities in the marketplace: The commercialization of higher education. Princeton: Princeton University Press.
Sources of competition among universities
-
Their discussions suggest clearly that the downside of competition has been underestimated and that it may have more prominent effects now than in past years. As reputation, respect and prestige are increasingly connected to resources and to success in the competitions that distribute those resources, scientists find more of their work and careers caught up in competitive arenas. The six categories of competition's effects that emerged in our analyses suggest reason for concern about the systemic incentives of the U.S. scientific enterprise and their implications for scientific integrity.
Implications of competition for scientific integrity.
-
When the actor Michael J. Fox was in the initial stages of creating his foundation for research on Parkinson's Disease, he came to recognize the negative impact that competition among scientific groups has on the overall progress of research on the disease. The director of one group actually said to him, "Well, if you don't help us, then, at least, don't help them" [1, p. 236]. Such was his introduction to the competitive world of U.S. science.
Anecdote about how Michael J. Fox discovered scientific competition when he set up his foundation for Parkinson's disease.
Tags
- popular dissemination
- universities
- results decay
- scientific competition
- reproducibility
- hypercompetitiveness
- competition
- leaky pipe
- scientific funding
- game-playing
- tournament economic structure
- excellence
- michael j. fox
- economics
- bioscience
- error
- parkinson's disease
- replicability
- anecdotes
- positive bias
- scientific and scholarly communication
- gender
- newsworthiness
- scarcity
- strategy
- scientific misconduct
- scientific integrity
- dissemination
- peer review
Annotators
URL
-