- Mar 2016
-
download.springer.com
-
David Blumenthal and colleagues [17] found that university geneticists and other life scientists who perceive higher levels of competition in their fields are more likely to withhold data or results. Such withholding took the form of omitting information from a manuscript or delaying publication to protect one’s scientific lead, maintaining trade secrets, or delaying publication to protect commercial value or meet a sponsor’s requirements. John P. Walsh and Wei Hong [18] have reported similar findings.
Evidence that competition causes scientists to withhold results and/or data
-
Competition in science has its bright side, which past analysts and commentators tended to emphasize and current writers often affirm. It has been credited with ensuring that ideas, work, proposals and qualifications of all interested parties are evaluated prior to the distribution of rewards, particularly funding and positions. From this perspective, competition promotes open examination and fair judgment. The norm of universalism [11] is supported when all qualified people have the opportunity to propose and defend their ideas and work in open competition [12]. After all, absent competition, cronyism is likely to flourish.
Positive value of competition in science.
-
Increases in levels of competition in science are symptomatic of a moregeneral hypercompetitive shift in organizations
The general shift to hypercompetitiveness. (source: https://hypothes.is/a/AVO5uuxxH9ZO4OKSlamG)
-
Thomas, L. G. III (1996). The two faces of competition: Dynamic resourcefulness and the hyper-competitive shift.Organization Science, 7, 221–242
Source for https://hypothes.is/a/AVO5unFjH9ZO4OKSlamC
-
Because science is a cumulative, interconnected, and competitive enterprise, with tensions among the various societies in which research is conducted, now more than ever researchers must balance collaboration and collegiality with competition and secrecy
The Institute of Medicine's call to balance collaboration and collegiality against competition and secrecy.
-
The scientific enterprise is characterized by competition for priority, influence,prestige, faculty positions, funding, publications, and students
Bok on how universities compete.
-
Bok, D. (2003).Universities in the marketplace: The commercialization of higher education.Princeton: Princeton University Press
Sources of competition among universities
-
Pfeffer, J. (1992).Managing with power: Politics and influence in organizations. Boston: HarvardBusiness School Press
On competition.
-
Their discussions suggest clearly that the downside of competition has been underestimated and that it may have more prominent effects now than in past years. As reputation, respect and prestige are increasingly connected to resources and to success in the competitions that distribute those resources, scientists find more of their work and careers caught up in competitive arenas. The six categories of competition’s effects that emerged in our analyses suggest reason for concern about the systemic incentives of the U.S. scientific enterprise and their implications for scientific integrity.
Implications of competition for scientific integrity.
-
When the actor Michael J. Fox was in the initial stages of creating his foundation for research on Parkinson’s Disease, he came to recognize the negative impact that competition among scientific groups has on the overall progress of research on the disease. The director of one group actually said to him, ‘‘Well, if you don’t help us, then, at least, don’t help them’’ [1, p. 236]. Such was his introduction to the competitive world of U.S. science.
Anecdote about how Michael J. Fox discovered scientific competition when he set up his foundation for Parkinson's disease.
-
-
www.nature.com
-
The winner-take-all aspect of the priority rule has its drawbacks, however. It can encourage secrecy, sloppy practices, dishonesty and an excessive emphasis on surrogate measures of scientific quality, such as publication in high-impact journals. The editors of the journal Nature have recently exhorted scientists to take greater care in their work, citing poor reproducibility of published findings, errors in figures, improper controls, incomplete descriptions of methods and unsuitable statistical analyses as evidence of increasing sloppiness. (Scientific American is part of Nature Publishing Group.) As competition over reduced funding has increased markedly, these disadvantages of the priority rule may have begun to outweigh its benefits. Success rates for scientists applying for National Institutes of Health funding have recently reached an all-time low. As a result, we have seen a steep rise in unhealthy competition among scientists, accompanied by a dramatic proliferation in the number of scientific publications retracted because of fraud or error. Recent scandals in science are reminiscent of the doping problems in sports, in which disproportionately rich rewards going to winners has fostered cheating.
How the priority rule is killing science.
-
-
mbio.asm.org
-
The role of external influences on the scientific enterprise must not be ignored. With funding success rates at historically low levels, scientists are under enormous pressure to produce high-impact publications and obtain research grants. The importance of these influences is reflected in the burgeoning literature on research misconduct, including surveys that suggest that approximately 2% of scientists admit to having fabricated, falsified, or inappropriately modified results at least once (24). A substantial proportion of instances of faculty misconduct involve misrepresentation of data in publications (61%) and grant applications (72%); only 3% of faculty misconduct involved neither publications nor grant applications.
Importance of low funding rates as incitement to fraud
-
The predominant economic system in science is “winner-take-all” (17, 18). Such a reward system has the benefit of promoting competition and the open communication of new discoveries but has many perverse effects on the scientific enterprise (19). The scientific misconduct among both male and female scientists observed in this study may well reflect a darker side of competition in science. That said, the preponderance of males committing research misconduct raises a number of interesting questions. The overrepresentation of males among scientists committing misconduct is evident, even against the backdrop of male overrepresentation among scientists, a disparity more pronounced at the highest academic ranks, a parallel with the so-called “leaky pipeline.” There are multiple factors contributing to the latter, and considerable attention has been paid to factors such as the unique challenges facing young female scientists balancing personal and career interests (20), as well as bias in hiring decisions by senior scientists, who are mostly male (21). It is quite possible that, in at least some cases, misconduct at high levels may contribute to attrition of women from the senior ranks of academic researchers.
Reason for fraud: winner take all
-
-
www.ncbi.nlm.nih.gov
-
Editors, Publishers, Impact Factors, and Reprint Income
On the incentives for journal editors to publish papers they think might improve IF... and how citations are gamed.
-
-
docs.google.com
-
How to annotate PDFs in Google Drive
- Download the file
- Open it in a browser with hypothes.is on
- Apparently the annotations will be visible to others even though the file is local.
-
-
sites.fas.harvard.edu
-
Of thee and of the white lylye flour
this is a comment
-
O Lord, oure Lord, thy name how merveillous
The opening line.
-
- Feb 2016
-
books.google.de
-
p. 95 "For nearly all of recorded history, we human beings have lived our lives isolated inside tiny cocoons of information. The most brilliant and knowledgeable of our ancestors often had direct access to only a tiny fraction of human knowledge. Then in the 1990s and 2000s, over a period of just two decades, our direct access to knowledge expanded perhaps a thousandfold. At the same time, a second, even more important expansion has been going on: an expansion in our ability to find meaning in our collective knowledge."
-
p. 60 "Citation is perhaps the most powerful technique for building an information commons that could be created with seventeenth-century technology."
-
Nielsen, Michael A. 2012. Reinventing Discovery: The New Era of Networked Science. Princeton, N.J: Princeton University Press.
-
-
www.theguardian.com
-
It is important to have elite scientific research, but we should not pretend that the interests of scientists perfectly overlap with the public interest. By presuming that innovation is all about pace rather than direction, publicly-funded science risks following rather than counterbalancing private sector interests. The structure of what Michael Polanyi called the ‘republic of science’ makes it easy for scientists to offload responsibility. Polanyi’s science is self-organising and devoted to the pure pursuit of knowledge. As philosopher Heather Douglas describes it, the general responsibilities of scientists to society are trumped by their role responsibilities towards their disciplines. While Polanyi would not argue for irresponsibility, he would have been relaxed about scientists being divorced from social responsibilities. Polanyi had this response to those who would direct science: ‘You can kill or mutilate the advance of science, you cannot shape it’.
Argument is really that science can't be left ungoverned.
-
We need the ERC and other blue-skies funding as part of the innovation ecosystem. The danger is that, by identifying this as ‘excellence’, it damns everything else, including applied science, user-driven innovation, open science, meta-analysis, regulatory science, social innovation and engagement with policymakers and the public, to mediocrity. In the last thirty years, our sense of ‘excellence’ has narrowed, not broadened
Emphasis on blueskies research damns other research to the status of mediocrity.
-
journal rankings discourage interdisciplinarity by systematically evaluating disciplinary research more highly
Impact rankings discourage multidisciplinary research by ranking disciplinary research more highly.
-
‘Excellence’ is an old-fashioned word appealing to an old-fashioned ideal. ‘Excellence’ tells us nothing about how important the science is and everything about who decides. It is code for decision-making based on the autonomy of scientists. Excellence is judged by peers and backed up by numbers such as h-indexes and journal impact factors, all of which reinforces disciplinary boundaries and focuses scientists’ attention inwards rather than on the problems of the outside world.
Excellence is a community defining term (insiders vs outsiders). It shows membership and success in navigating disciplinary norms.
See Lamont, Michèle. 2009. How Professors Think: Inside the Curious World of Academic Judgment. Cambridge, Mass: Harvard University Press.
Merton, Robert K. 1972. “Insiders and Outsiders: A Chapter in the Sociology of Knowledge.” American Journal of Sociology 78 (1): 9–47.
-
Space missions are about technological problems with technological solutions. It is normally clear whether or not they have succeeded. There is far more disagreement about causes and cures for ‘wicked’ problems of poverty or climate change. Science alone cannot give us the answer.
Space is a technological problem and it is clear when it has succeeded (metrics). Poverty is a multi-disciplinary problem about which there is room for nuance.
-
the problems of space and the problems of poverty are qualitatively different, demanding very different approaches
poverty and space travel are fundamentally different types of problems.
-
-
www.sciencedirect.com.ezproxy.alu.talonline.ca
-
Another important, related but distinct function is that general journals act as a filter: ideally, we publish the ‘best’ papers, reporting the stories most likely to be of wide interest, and those with the greatest claim to coverage in the general media (newspapers, television and so on). A hierarchy of journals, with the general ones at the ‘top’, helps journalists to find reports of the most significant developments – those that are of most interest, and also (importantly) ‘sound’, in the sense of having passed rigorous peer review. In this sense, general journals offer a link between the specialist scientific literature and the general media and public.
The function of "top" general science journals.
-
-
www.eigenfactor.org
-
Important for problems with career peer review.
-
-
www.hrc.govt.nz
-
The selection of successful proposals is not the same as that for other HRC contracts. All proposals that meet the eligibility criteria will be assessed for compatibility with the scheme’s intent; proposals won’t be scored or ranked. All proposals that are considered eligible and compatible will be considered equally eligible to receive funding, and a random process will be used to select approximately three proposals to be offered funding. A full description of the assessment process to determine eligibility, compatibility and which applications will receive funding can be found in Appendix 1 of the Guidelines document.
Health Research Council of New Zealand uses lottery to fund acceptable grants.
-
-
journals.cambridge.org
-
This is the article where the authors resubmitted published papers to see how they would do. For criticism of method, see Weller 2001 p. 312.
-
Peters, Douglas P., and Stephen J. Ceci. 1982. “Peer-Review Practices of Psychological Journals: The Fate of Published Articles, Submitted Again.” Behavioral and Brain Sciences 5 (02): 187–95. doi:10.1017/S0140525X00011183.
-
-
elifesciences.org
-
These observations have important implications for the grant peer review system. If reviewers are unable to reliably predict which meritorious applications are most likely to be productive, then reviewers might save time and resources by simply identifying the top 20% and awarding funding within this group on a random basis or according to programmatic priorities. In this regard, we refer to our recent suggestion that the NIH consider a modified lottery system (Fang and Casadevall, 2014) and note that the New Zealand Health Research Council has already moved to a lottery system to select proposals for funding in its Explorer Grants program (Health Research Council of New Zealand, 2015).
NZ health system has gone to a lottery system.
-
17% (334 of 1987) of grants with a percentile score of zero failed to produce any citations.
even top grants often fail to produce citations.
-
In contrast, a recent analysis of over 130,000 grant applications funded by the NIH between 1980 and 2008 concluded that better percentile scores consistently correlate with greater productivity (Li and Agha, 2015). Although the limitations of using retrospective publication/citation productivity to validate peer review are acknowledged (Lindner et al., 2015; Lauer and Nakamura, 2015), this large study has been interpreted as vindicating grant peer review (Mervis, 2015; Williams, 2015). However, the relevance of those findings for the current situation is questionable since the analysis included many funded grants with poor percentile scores (>40th percentile) that would not be considered competitive today. Moreover, this study did not examine the important question of whether percentile scores can accurately stratify meritorious applications to identify those most likely to be productive. We therefore performed a re-analysis of the same dataset to specifically address this question. Our analysis focused on a subset of grants in the earlier study (Li and Agha, 2015) that were awarded a percentile score of 20 or better: this subset contained 102,740 grants. This percentile range is most relevant because NIH paylines (that is, the lowest percentile score that is funded) seldom exceed the 20th percentile and have hovered around the 10th percentile for some institutes in recent years.
Is a reanalysis of the data in Li and Agha 2015, concentrating on funding decisions "above the payline" (i.e. grants scored at or better than the 20th percentile).
Li, D., and L. Agha. 2015. “Big Names or Big Ideas: Do Peer-Review Panels Select the Best Science Proposals?” Science 348 (6233): 434–38. doi:10.1126/science.aaa0185.
-
Most funding agencies employ panels in which experts review proposals and assign scores to them based on a number of factors (such as expected impact and scientific quality). However, several studies have suggested significant problems with the current system of grant peer review. One problem is that the number of reviewers is typically inadequate to provide statistical precision (Kaplan et al., 2008). Researchers have also found considerable variation among scores and disagreement regarding review criteria (Mayo et al., 2006; Graves et al., 2011; Abdoul et al., 2012), and a Bayesian hierarchical statistical model of 18,959 applications to the NIH found evidence of reviewer bias that influenced as much as a quarter of funding decisions (Johnson, 2008). Although there is general agreement that peer review can discriminate sound grant applications from those containing serious flaws, it is uncertain whether peer review can accurately predict those meritorious applications that are most likely to be productive. An analysis of over 400 competing renewal grant applications at one NIH institute (the National Institute of General Medical Sciences) found no correlation between percentile score and publication productivity of funded grants (Berg, 2013). A subsequent study of 1492 grants at another NIH institute (the National Heart, Lung and Blood Institute) similarly found no correlation between the percentile score and publication or citation productivity, even after correction for numerous variables (Danthi et al., 2014). These observations suggest that once grant applications have been determined to be meritorious, expert reviewers cannot accurately predict their productivity.
Peer review does little more than distinguish flawed from not flawed; after that, it is a poor predictor of success.
-
-
-
The mechanism used by the National Institutes of Health (NIH) to allocate government research funds to scientists whose grants receive its top scores works essentially no better than distributing those dollars at random, new research suggests
Peer review at NIH no better than chance.
-
-
0-www.nature.com.darius.uleth.ca
-
In the late 1980s, the United Kingdom became the first country to systematically evaluate the quality of its university research. The REF is the latest incarnation of these check-ups. Previously known as the Research Assessment Exercise (RAE), the evaluations are widely credited with helping to improve the country's research system. Between 2006 and 2010, citations of UK articles grew by 7.2%, faster than the world average of 6.3%; and the country's share of citations grew by 0.9% per year, according to a 2011 analysis conducted by publishing company Elsevier for the government.
UK citation share grew by <1%/year despite the REF.
-
At Cardiff University, around ten academics were pressured to switch to teaching-focused contracts after they scored poorly on a practice exercise, so as not to drag down their department, says Peter Guest, an archaeologist at Cardiff and the university's UCU liaison on the REF. This form of game-playing is discouraged, but not expressly forbidden, by the REF — however, making career decisions solely on the basis of the evaluation is against the university's own policies, as well as those of many other institutions, says Guest.
Evidence of Game Playing encouraged by the REF.
-
-
asr.sagepub.com
-
Guetzkow, Joshua, Michèle Lamont, and Grégoire Mallard. 2004. “What Is Originality in the Humanities and the Social Sciences?” American Sociological Review 69 (2): 190–212. doi:10.1177/000312240406900203.
-
Whereas the literature tends to equate originality with substantive innovation and to consider the personal attributes of the researcher as irrelevant to the evaluation process, we show that panelists often view the originality of a proposal as an indication of the researcher’s moral character, especially of his/her authenticity and integrity
Panelists often see originality (in SSH) as issue of moral character (!)
-
-
www.mla.org
-
And as Jonathan Culler pointed out in Framing the Sign: Criticism and Its Institutions, the external review process enabled the rise of many forms of innovative and even controversial work in the humanities
External peer review allows innovative work in Humanities.
-
As Christopher Jencks and David Riesman argued in The Academic Revolution, the development of external peer review freed individual scholars from the vertical—and sometimes parochial and territorial—evaluation of their work by local college deans and upper-level administrators
External peer review for STP stops "vertical" evaluation.
-
-
0-search.proquest.com.darius.uleth.ca
-
Weiser, Irwin. 2012. “Peer Review in the Tenure and Promotion Process.” College Composition and Communication 63 (4): 645–72.
Looks at Peer review in promotions and STP.
-
-
asr.sagepub.com
-
The document is the fundamental unit of analysis in most large-scale quantitative studies of scientific behavior
Document is fundamental unit of scientific analysis.
-
The evidence and theory outlined here suggest that most published findings in well-developed fields should be expected and unsurprising.11 Such findings fit with tradition: scientists with the appropriate habitus are disposed to generate and acknowledge them as valid science. By contrast, unexpected findings should rarely reach publication. Tradition will be reliably but modestly rewarded with citations, whereas subversive innovation, if published, should display a higher average and variance in acclaim.12 Finally, unexpected findings should be over-represented in the work of high-achieving scientists. Scientific capital is disproportionately awarded for work that alters the scope of accepted knowledge.
- Most published findings should be expected and unsurprising
- Unexpected findings should be rare to reach publication
- (1) should result in modest citations
- (2) should result in a higher average and variance in acclaim (citations)
- Unexpected findings should be over-represented in the work of high-achieving scientists, as capital is disproportionately awarded to disruptive work.
-
Kuhn ([1959] 1977) introduced the notion of the essential tension at a conference motivated by Cold War concerns about declining originality, innovation, and scientific competitiveness in the United States (a persistent concern; see Cowen 2011). The conveners, all psychologists, had framed the conference around a dichotomy between convergent and divergent styles of thinking. Convergent thinking was conservative, oriented toward consensus and shared patterns of thought (Kuhn [1959] 1977). Divergent thinking, by contrast, was radical, characterized by “flexibility and open-mindedness” (Kuhn [1959] 1977:226). According to most of the conference speakers, divergent thought was essential for scientific progress, yet it was being stifled by the U.S. educational system.
Nice background to Kuhn's essential tension: a Cold War conference; the psychologist convenors established a valorisation of "convergent" and "divergent" thinking that saw "divergent" thinking as good and essential for science. Kuhn challenged that claim and argued that both were crucial.
-
When following a conservative strategy and adhering to a research tradition in their domain, scientists achieve publication with high probability: they remain visibly productive, but forgo opportunities for originality. When following a risk-taking strategy, scientists fail more frequently: they may appear unproductive for long periods, like the seven years Andrew Wiles spent proving Fermat’s Last Theorem or the decade Frederick Sanger invested in developing the “Sanger method” of DNA sequencing.4 If a risky project succeeds, however, it may have a profound impact, generating substantial new knowledge and winning broad acclaim (Kuhn 1962). This strategic tension is repeatedly articulated as a dichotomy: in the sociology of science, as reliable “succession” versus risky “subversion” (Bourdieu 1975) or “relevance” versus “originality” (Whitley 2000); in the philosophy of science, as “conformity” versus “dissent” or “discipline” versus “rebellion” (Polanyi 1969); and in the study of innovation, as “exploitation” versus “exploration” (March 1991).5 Recent theoretical work supports this broad picture by highlighting the distinctive contributions (Weisberg and Muldoon 2009) and rewards (Kleinberg and Oren 2011) associated with traditional versus innovative strategies.
Bibliographic discussion of the essential tension.
-
To remain in the research game requires productivity. Scientists typically achieve this by incremental contributions to established research directions. This may yield enough recognition to maintain a (relatively low) position. Achieving high status, by contrast, requires original and transformative contributions, often obtained by pursuing risky new directions (Merton 1957).
Kuhn's essential tension turned into career choice: productivity requires incremental contributions; high status requires risky innovation.
-
Article is full of very interesting references to industrialisation, rewards, economics of scientific choice.
-
In this article, we examine scientific choice quantitatively and at scale, using published claims in contemporary biomedicine to make inferences about underlying choices and dispositions
Quantitative approach to science choice.
-
In an expanding universe of possible research questions, a topic that attracts intense investigation is separated from neglected or abandoned topics by more than just the contours of nature. Scientists’ choices matter: in aggregate, patterned choices give scientific knowledge its shape and guide its future evolution
Science culture and incentives shape science discoveries.
-
By studying prizewinners in biomedicine and chemistry, we show that occasional gambles for extraordinary impact are a compelling explanation for observed levels of risky innovation. Our analysis of the essential tension identifies institutional forces that sustain tradition and suggests policy interventions to foster innovation.
Studies prize winners in biomedicine and chemistry.
-
Foster, Jacob G., Andrey Rzhetsky, and James A. Evans. 2015. “Tradition and Innovation in Scientists’ Research Strategies.” American Sociological Review 80 (5): 875–908. doi:10.1177/0003122415601618.
-
An innovative publication is more likely to achieve high impact than a conservative one,
-
-
www.jstor.org.ezproxy.alu.talonline.ca
-
Moreover, their research is cited by a more diverse set of journals, both relative to controls and to the pre-appointment period.
HHMI researchers' work is cited by a more diverse set of journals after their appointment.
-
These keywords are also more likely to change after their HHMI appointment.
HHMI researchers change keywords (showing sign of intellectual development) after funding.
-
We show that the work of HHMI investigators is characterized by more novel keywords than controls.
HHMI researchers use more novel keywords than NIH-funded controls.
-
Our results provide support for the hypothesis that appropriately designed incentives stimulate exploration. In particular, we find that the effect of selection into the HHMI program increases as we examine higher quantiles of the distribution of citations. Relative to early career prize winners (ECPWs), our preferred econometric estimates imply that the program increases overall publication output by 39%; the magnitude jumps to 96% when focusing on the number of publications in the top percentile of the citation distribution. Success is also more frequent among HHMI investigators when assessed with respect to scientists' own citation impact prior to appointment, rather than relative to a universal citation benchmark. Symmetrically, we also uncover robust evidence that HHMI-supported scientists "flop" more often than ECPWs: they publish 35% more articles that fail to clear the (vintage-adjusted) citation bar of their least well cited pre-appointment work. This provides suggestive evidence that HHMI investigators are not simply rising stars anointed by the program. Rather, they appear to place more risky scientific bets after their appointment, as theory would suggest.
Howard Hughes Medical Institute (HHMI) researchers are funded as people rather than projects (compared to NIH researchers). As a result, they produce both more high-impact papers and more "flops" (i.e. low-impact papers).
-
In 1980, a scientist from the University of Utah, Mario Capecchi, applied for a grant at the National Institutes of Health (NIH). The application contained three projects. The NIH peer reviewers liked the first two projects, which were building on Capecchi's past research efforts, but they were unanimously negative in their appraisal of the third project, in which he proposed to develop gene targeting in mammalian cells. They deemed the probability that the newly introduced DNA would ever find its matching sequence within the host genome vanishingly small and the experiments not worthy of pursuit. The NIH funded the grant despite this misgiving, but strongly
Gene targeting (knock-out genes) was almost turned down by the NIH.
-
Azoulay, Pierre, Joshua S. Graff Zivin, and Gustavo Manso. 2011. “Incentives and Creativity: Evidence from the Academic Life Sciences.” The Rand Journal of Economics 42 (3): 527–54.
-
-
fg2fy8yh7d.search.serialssolutions.com
-
Azoulay, Pierre, Joshua S. Graff Zivin, and Gustavo Manso. 2011. “Incentives and Creativity: Evidence from the Academic Life Sciences.” The Rand Journal of Economics 42 (3): 527–54.
-
-
Local file
-
Foster, Jacob G., Andrey Rzhetsky, and James A. Evans. 2015. “Tradition and Innovation in Scientists’ Research Strategies.” American Sociological Review 80 (5): 875–908. doi:10.1177/0003122415601618.
-
An innovative publication is more likely to achieve high impact than a conservative one
I wonder exactly how they mean this. In fact, such work is less likely to get published, and Ioannidis et al. 2014 would argue that it is actually more conservative work that tends to get cited more.
Ioannidis, John P. A., Kevin W. Boyack, Henry Small, Aaron A. Sorensen, and Richard Klavans. 2014. “Bibliometrics: Is Your Most Cited Work Your Best?” Nature 514 (7524): 561–62. doi:10.1038/514561a.
-
-
www.federalreserve.gov
-
We attempt to replicate 67 papers published in 13 well-regarded economics journals using author-provided replication files that include both data and code. Some journals in our sample require data and code replication files, and other journals do not require such files. Aside from 6 papers that use confidential data, we obtain data and code replication files for 29 of 35 papers (83%) that are required to provide such files as a condition of publication, compared to 11 of 26 papers (42%) that are not required to provide data and code replication files. We successfully replicate the key qualitative result of 22 of 67 papers (33%) without contacting the authors. Excluding the 6 papers that use confidential data and the 2 papers that use software we do not possess, we replicate 29 of 59 papers (49%) with assistance from the authors. Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable. We conclude with recommendations on improving replication of economics research.
33% of papers could be replicated from the author-provided data and code without contacting the authors; 49% with the authors' help.
-
Chang, Andrew C., and Phillip Li. 2015. “Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say ‘Usually Not.’” 2015-083. Finance and Economics Discussion Series. Washington: Board of Governors of the Federal Reserve System. http://www.federalreserve.gov/econresdata/feds/2015/files/2015083pap.pdf.
About reproducibility in Economics.
-
-
www.psychfiledrawer.org
-
The "file drawer problem" (a term coined in 1979 by Robert Rosenthal, a member of our Advisory Board) refers to the bias introduced into the scientific literature by selective publication--chiefly by a tendency to publish positive results but not to publish negative or nonconfirmatory results
Positive bias in journals: known as the "file drawer problem."
-
-
www.newscientist.com
-
My feeling is that the whole system is out of date and comes from a time when journal space was limited
Claim that bias against replication is a scarcity issue.
-
-
psychsciencenotes.blogspot.com
-
Replication is key in science. Findings become robust and reliable only once they have survived various attempts to break them. Bem's second favour is to expose the well known secret that major journals simply won't publish replications. This is a real problem: in this age of Research Excellence Frameworks and other assessments, the pressure is on people to publish in high impact journals. Careful replication of controversial results is therefore good science but bad research strategy under these pressures, so these replications are unlikely to ever get run. Even when they do get run, they don't get published, further reducing the incentive to run these studies next time. The field is left with a series of "exciting" results dangling in mid-air, connected only to other studies run in the same lab.
On connection between lack of replication and the Research Excellence Framework.
-
It's back on my radar because several psychologists, including Richard Wiseman, recently submitted a failure to replicate the studies to the Journal of Personality & Social Psychology (JPSP), which is where Bem published his work. As reported here, Eliot Smith, the editor, refused to even send this (and another, successful replication as well) out for review. The reason Smith gives is that JPSP is not in the business of publishing mere replications - it prioritises novel results, and he suggests the authors take their work to other (presumably lesser) journals. This is nothing new - flagship journals like JPSP all have policies in place like this. But it's not a good look, and it got me thinking.
On how the Journal of Personality and Social Psychology refuses to publish replication studies that refute their previous publications.
-
-
blogs.discovermagazine.com
-
Scientists get criticised for not carrying out enough replications – there is little glory, after all, in merely duplicating old ground rather than forging new ones. Science journals get criticised for not publishing these attempts. Science journalists get criticised for not covering them. This is partly why I covered Doyen’s study in the first place. In light of this “file drawer problem”, you might have thought that replication attempts would be welcome
On why replications don't happen.
-
-
www.nature.com.ezproxy.alu.talonline.ca
-
These problems occur throughout the sciences, but psychology has a number of deeply entrenched cultural norms that exacerbate them. It has become common practice, for example, to tweak experimental designs in ways that practically guarantee positive results. And once positive results are published, few researchers replicate the experiment exactly, instead carrying out 'conceptual replications' that test similar hypotheses using different methods. This practice, say critics, builds a house of cards on potentially shaky foundations.
Why replication is particularly difficult in Psych.
-
Three research teams independently tried to replicate the effect Bem had reported and, when they could not, they faced serious obstacles to publishing their results
Studies that fail to replicate find it difficult to get published.
-
-
books.google.ca
-
65 Weller 1996 discovered that between 15.7 and 20.8% of published articles had been previously rejected.
Weller, A C. 1996. “A Comparison of Authors Publishing in Two Groups of U.S. Medical Journals.” Bulletin of the Medical Library Association 84 (3): 359–66.
-
62 Campanario 1995 studies the rejection of Nobel Prize-winning work (see also Campanario 2009). See also Gans and Shepherd 1994 (for economists).
Campanario, Juan Miguel. 1995. “Commentary On Influential Books and Journal Articles Initially Rejected Because of Negative Referees’ Evaluations.” Science Communication 16 (3): 304–25. doi:10.1177/1075547095016003004.
Campanario, Juan Miguel. 2009. “Rejecting and Resisting Nobel Class Discoveries: Accounts by Nobel Laureates.” Scientometrics 81 (2): 549–65. doi:10.1007/s11192-008-2141-5.
Gans, Joshua S., and George B. Shepherd. 1994. “How Are the Mighty Fallen: Rejected Classic Articles by Leading Economists.” The Journal of Economic Perspectives (1986-1998) 8 (1): 165.
-
51 campanario 1996 studies highly cited papers that were initially rejected. Compare Ioannidis et al 2014.
Campanario, Juan Miguel. 1996. “Have Referees Rejected Some of the Most-Cited Articles of All Times?” Journal of the American Society for Information Science (1986-1998) 47 (4): 302–10.
Ioannidis, John P. A., Kevin W. Boyack, Henry Small, Aaron A. Sorensen, and Richard Klavans. 2014. “Bibliometrics: Is Your Most Cited Work Your Best?” Nature 514 (7524): 561–62. doi:10.1038/514561a.
-
47 Willis and Bobys 1983 the "only study of rejection letters."
Willis, Cecil L., and Richard S. Bobys. 1983. “Perishing in Publishing: An Analysis of Manuscript Rejection Letters.” The Wisconsin Sociologist 20 (4): 84–91.
-
43-44 Earliest study of rejection rates at a journal: Goodrich 1945, a study of the rejection rate at the American Sociological Review. There was no peer review; editors admitted unconscious bias towards prestigious universities and used membership in the society as an optional criterion for acceptance.
Goodrich, Dorris West. 1945. “An Analysis of Manuscripts Received by the Editors of the American Sociological Review from May 1, 1944 to September 1, 1945.” American Sociological Review 10 (6): 716–25. doi:10.2307/2085841.
-
43 Garvey, Lin, and Tomita 1972 discovered that almost 1/3 of authors who had a paper rejected had "abandoned the subject matter area of their articles" within a year (p. 214).
Garvey, William D., Nan Lin, and Kazuo Tomita. 1972. “Research Studies in Patterns of Scientific Communication: III. Information-Exchange Processes Associated with the Production of Journal Articles.” Information Storage and Retrieval 8 (5): 207–21. doi:10.1016/0020-0271(72)90031-9.
-
44-45 Ingelfinger rule: won't publish articles that have been presented, discussed with reporters, or published in any form elsewhere--including data. Once a paper is under consideration and production, it can't be discussed with reporters.
This clearly harms science in the interest of journals.
-
68 Are rejected articles submitted to "lower quality" journals?
In Radiology: 13% of MSS rejected by AJR were published in journals with a higher impact factor; 19% in journals with a larger circulation. In LIS: 17.6% of articles rejected by the Journal of Documentation were published in a higher-impact journal. In Biomedicine, there is more stasis: between 44% and 67% of rejected articles end up in a similar (or higher) journal; only 20.6% went to a lower journal.
A study of biochemists found that 70% of rejected articles were later published in a highly ranked journal.
-
67 Weller's summary of statistics:
"Over 20% of articles published were previously rejected and over 50% of rejected manuscripts were eventually published."
-
67 Wilson 1978 discovered that only 1/6 of subsequently published articles from the Journal of Clinical Investigation were revised after rejection (which calls into question whether the review process helps refine papers).
On the other hand, Yankauer 1985 found that "just over half of 61 manuscripts" submitted to the American Journal of Public Health were moderately or substantially revised before submission to a different journal.
-
Garvey et al. 1972 tracked how many submissions rejected articles underwent before acceptance:
90% required only 2 submissions (i.e. were accepted by the second journal); 1.3% required 4 or more submissions.
Each rejection added about 3 months to publication time (a rough expected-delay sketch follows below).
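A minimal sketch (mine, not from the book or from Garvey et al.) of the expected extra delay these figures imply, assuming the unreported remainder of papers needed exactly 3 submissions and treating "4/4+" as exactly 4:

```python
# Rough sketch of the expected extra publication delay implied by the figures above.
# Assumptions (mine, not the source's): papers not covered by the reported 90%
# (2 submissions) and 1.3% (4+ submissions) needed exactly 3 submissions, and
# "4 or more" is treated as exactly 4.

MONTHS_PER_REJECTION = 3  # each rejection added about 3 months (from the source)

# (submissions before acceptance, assumed share of eventually published papers)
assumed_distribution = [
    (2, 0.90),   # accepted by the second journal
    (3, 0.087),  # assumed remainder
    (4, 0.013),  # reported as 4 or more submissions
]

expected_delay = sum(share * (subs - 1) * MONTHS_PER_REJECTION
                     for subs, share in assumed_distribution)
print(f"Expected extra delay: {expected_delay:.1f} months")  # roughly 3.3 months
```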
-
64-65 between 27.7 and 85% (avg 51.4%) of rejected articles are published elsewhere according to various studies of editors.
These are likely minimum percentages, since rejected articles can be hard to find and their identity hard to determine.
A study focusing on authors found that between 12 and 35% (average 21.6%) of rejected articles were published in the end.
-
52 another table of reasons for rejection.
-
50 table of reasons for rejection in several fields.
-
48 Agronomists experienced an average of 1.9 rejections; 90% of authors listed in Social and Behavioural abstracts had experienced rejection; a survey of economists reveals that 85% admit to at least one rejection.
-
47 only 57% of rejection letters in Willis and Bobys contained even excerpts from reviewers. Out of 350 letters, length varied from 24 words to 480.
-
47-48. Two medical journal editors cite scarcity of page space as the cause for rejection.
-
About 56% of papers rejected are ultimately published (p. 66).
-
-
www.sciencedirect.com.ezproxy.alu.talonline.ca
-
Interesting papers are often highly cited by the scientific community regardless of journal.
Again, I'd like to know his evidence for this. It seems to contradict Ioannidis, John P. A., Kevin W. Boyack, Henry Small, Aaron A. Sorensen, and Richard Klavans. 2014. “Bibliometrics: Is Your Most Cited Work Your Best?” Nature 514 (7524): 561–62. doi:10.1038/514561a.
-
To calculate the opportunity cost in citations as a result of delayed publication, I analyzed my first-authored papers that were first submitted to a high-impact journal and rejected. For each paper, there is a linear or quadratic relationship (r^2 > 0.97) between years since publication and its number of citations in Google Scholar. If I had submitted each paper only to the journal it was published in, this would have resulted in an earlier publication date and more time to accrue citations. I calculated this opportunity cost. For example, the equation Citations = 2.3 ∗ YEARS^2 + 12.7 ∗ YEARS – 3.9 (r^2 = 0.996) describes the relationship between years since publication and number of citations of my most cited paper [7]. If I had first submitted this paper to the journal where it was eventually published in and had not lost time due to rejections and resubmissions, it would have been published 1.27 years earlier and would have accumulated approximately 70 more citations. On average, each resubmitted paper accumulated 47.4 fewer citations by being published later, with an overall opportunity cost of 190 lost citations.
The opportunity cost in citations caused by rejection and resubmission: his most-cited paper would have been published 1.27 years earlier and accumulated about 70 more citations; on average, each resubmitted paper lost 47.4 citations (a sketch of the calculation follows below).
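A minimal sketch of the opportunity-cost arithmetic in the passage above. The quadratic fit is quoted from the source; the paper's age at the time of the analysis (years_since_publication) is my assumption, chosen only to illustrate the ~70-citation figure:

```python
# Citation curve fitted by the author (quoted above): Citations = 2.3*Y^2 + 12.7*Y - 3.9
def citations(years: float) -> float:
    return 2.3 * years**2 + 12.7 * years - 3.9

delay = 1.27                   # years lost to rejection and resubmission (from the source)
years_since_publication = 8.5  # assumed age of the paper at the time of the analysis

# Citations the paper would have had if published `delay` years earlier,
# minus the citations it actually has now.
lost = citations(years_since_publication + delay) - citations(years_since_publication)
print(f"Citations forgone by a {delay}-year delay: {lost:.0f}")  # roughly 70
```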
-
Furthermore, the strict page limits of high-impact journals can force authors to omit valuable information [1], sometimes reducing the reach and citation rate of a paper. A rejection also forces a scientist to spend time revising and resubmitting, instead of working on new papers. The rejection–resubmission cycle sometimes demoralizes a scientist sufficiently that the paper is never published. Many of these papers have valuable ecology, natural history and life history data, often from the developing world, that are essential for improving the biodiversity databases and the global analyses based on them [7].
Rejection causes lost data, often from the developing world.
-
even when published in a lower-impact journal, good papers are usually recognized quickly by the scientific community, often achieving citation rates higher than those of high-impact journals
I wonder what evidence there is for this claim?
-
-
thewinnower.com
-
Çağan’s analysis shows quite unequivocally: the citation costs clearly outweigh the potential benefits of the rejection-resubmission cycle.
On the costs of the rejection cycle.
-
-
Local file
-
Brief discussion of cost of rejection (in terms of price of article published) on p. 297.
-
Waltham, Mary. 2010. “The Future of Scholarly Journal Publishing among Social Science and Humanities Associations.” Journal of Scholarly Publishing 41 (3): 257–324.
-
-
repository.jisc.ac.uk
-
No real discussion of the cost of rejection.
-
Swan, A. 2010. “Modelling Scholarly Communication Options: Costs and Benefits for Universities.” Programme/Project deposit. February 25. http://repository.jisc.ac.uk/442/.
-
-
onlinelibrary.wiley.com.ezproxy.alu.talonline.ca
-
Rowland, Fytton. 2002. “The Peer-Review Process.” Learned Publishing 15 (4): 247–58. doi:10.1087/095315102760319206.
-
Rowland on cost of rejection (253):
"One important variable is the rejection rate. The amount of work (and thus cost) entailed in rejecting a paper is essentially the same as in accepting one. So if a journal has a rejection rate of 80% and each paper costs £100 to referee, the refereeing cost per paper published is £500."
-
-
Local file
-
Morris, Sally. 2005. “The True Costs of Scholarly Journal Publishing.” Learned Publishing 18 (2): 115–26.
-
Rowland, Fytton. 2002. “The Peer-Review Process.” Learned Publishing 15 (4): 247–58. doi:10.1087/095315102760319206.
-
Mark Ware Consulting Ltd. 2008. Peer Review in Scholarly Journals: Perspective of the Scholarly Community— an International Study. Bristol, UK: Publishing Research Consortium. http://www.publishingresearch.net/documents/PeerReviewFullPRCReport-final.pdf.
-
Research Information Network. 2008. Activities, Costs and Funding Flows in the Scholarly Communications System in the UK. London, UK: RIN. http://www.rin.ac.uk/our-work/communicating-and-disseminating-research/activities-costs-and-funding-flows-scholarly-commu.
-
-
onlinelibrary.wiley.com.ezproxy.alu.talonline.ca
-
On the cost rejected papers add to the peer review system (p. 119): "The cost to the academic community of refereeing was estimated by Tenopir and King in 1997 to be $480/article (based on an average time of 3–6 hours per article by each of 2–3 referees). At 2004 levels this is approximately $540 per submitted article. Clearly, the percentage of papers which are rejected makes a difference to the overall cost to the journal; in a reasonable quality journal at least 50% of papers will be rejected, while some top journals (e.g. Nature) may reject as many as 90%. Most articles get published somewhere, and as they work their way through the system, being refereed for different journals, they accumulate additional cost; indeed, it could be said that a poor (or, at least, inappropriately submitted) article costs the system much more overall than does a good one."
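A minimal sketch of how a rejected paper accumulates refereeing cost as it moves through the system, assuming Morris's 2004 figure of roughly $540 per submitted article and that every submission is fully refereed (both simplifying assumptions, for illustration only):

```python
COST_PER_SUBMISSION = 540  # USD per submitted article, Morris's 2004 estimate (quoted above)

def cumulative_refereeing_cost(rejections_before_acceptance: int) -> int:
    """Total refereeing cost across all journals that handled the paper."""
    return (rejections_before_acceptance + 1) * COST_PER_SUBMISSION

for rejections in range(4):
    print(rejections, cumulative_refereeing_cost(rejections))  # 540, 1080, 1620, 2160
```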
-
-
www.nature.com
-
Red flags in review: signs that an author might be trying to game the system. A handful of researchers have exploited loopholes in peer-review systems to ensure that they review their own papers. Here are a few signs that should raise suspicions:
- The author asks to exclude some reviewers, then provides a list of almost every scientist in the field.
- The author recommends reviewers who are strangely difficult to find online.
- The author provides Gmail, Yahoo or other free e-mail addresses to contact suggested reviewers, rather than e-mail addresses from an academic institution.
- Within hours of being requested, the reviews come back. They are glowing. Even reviewer number three likes the paper.
Red flags for review fraud for journal editors.
-
This article discusses the case of Hyung-In Moon, who submitted fake email addresses for suggested referees (i.e. email addresses that referred back to himself).
-
-
www-ncbi-nlm-nih-gov.stanford.idm.oclc.org
-
Adopt preferred publication of negative over positive results; require very demanding reproducibility criteria before publishing positive results
If you think about it, it is interesting that normal bias works the other way: in theory it should be harder to publish positive results (i.e. you should need a higher standard of evidence) than negative ones. But it isn't.
-
- Accept the current system as having evolved to be the optimal solution to complex and competing problems.
- Promote rapid, digital publication of all articles that contain no flaws, irrespective of perceived “importance”.
- Adopt preferred publication of negative over positive results; require very demanding reproducibility criteria before publishing positive results.
- Select articles for publication in highly visible venues based on the quality of study methods, their rigorous implementation, and astute interpretation, irrespective of results.
- Adopt formal post-publication downward adjustment of claims of papers published in prestigious journals.
- Modify current practice to elevate and incorporate more expansive data to accompany print articles or to be accessible in attractive formats associated with high-quality journals: combine the “magazine” and “archive” roles of journals.
- Promote critical reviews, digests, and summaries of the large amounts of biomedical data now generated.
- Offer disincentives to herding and incentives for truly independent, novel, or heuristic scientific work.
- Recognise explicitly and respond to the branding role of journal publication in career development and funding decisions.
- Modulate publication practices based on empirical research, which might address correlates of long-term successful outcomes (such as reproducibility, applicability, opening new avenues) of published papers.
Solutions to the problems of artificial scarcity.
-
scientists “selling” manuscripts.
Scientific authors are the vendors of articles. (though they are also analysed in this article as auction-style buyers of space).
-
The authority of journals increasingly derives from their selectivity. The venue of publication provides a valuable status signal. A common excuse for rejection is selectivity based on a limitation ironically irrelevant in the modern age—printed page space. This is essentially an example of artificial scarcity. Artificial scarcity refers to any situation where, even though a commodity exists in abundance, restrictions of access, distribution, or availability make it seem rare, and thus overpriced. Low acceptance rates create an illusion of exclusivity based on merit and more frenzied competition among scientists “selling” manuscripts.
On the artificial scarcity of selectivity in journals.
-
Some unfavourable consequences may be predicted and some are visible. Resource allocation has long been recognised by economists as problematic in science, especially in basic research where the risks are the greatest. Rival teams undertake unduly dubious and overly similar projects; and too many are attracted to one particular contest to the neglect of other areas, reducing the diversity of areas under exploration [39]
Because of this scarcity, you get "herd" behaviour as teams are attracted to popular areas of research and neglect others. This is an inefficient allocation of resources.
-
Impact factors are widely adopted as criteria for success, despite whatever qualms have been expressed [27–32]. They powerfully discriminate against submission to most journals, restricting acceptable outlets for publication. “Gaming” of impact factors is explicit. Editors make estimates of likely citations for submitted articles to gauge their interest in publication. The citation game [33,34] has created distinct hierarchical relationships among journals in different fields. In scientific fields with many citations, very few leading journals concentrate the top-cited work [35]: in each of the seven large fields to which the life sciences are divided by ISI Essential Indicators (each including several hundreds of journals), six journals account for 68%–94% of the 100 most-cited articles in the last decade (Clinical Medicine 83%, Immunology 94%, Biochemistry and Biology 68%, Molecular Biology and Genetics 85%, Neurosciences 72%, Microbiology 76%, Pharmacology/Toxicology 72%).
Impact factors strongly distort the publishing world; encourage gaming; create scarcity.
-
The scientific publishing industry is used for career advancement [36]: publication in specific journals provides scientists with a status signal. As with other luxury items intentionally kept in short supply, there is a motivation to restrict access [37,38].
On high impact journals as "luxury good"--meaning that there is an incentive to create scarcity.
-
The acceptance rate decreases by 5.3% with doubling of circulation, and circulation rates differ by over 1,000-fold among 114 journals publishing clinical research [25]
Acceptance rates fall 5.3% per doubling of circulation.
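One way to read that statistic (my interpretation, treating the 5.3% as percentage points per doubling of circulation; the reference journal and figures below are hypothetical):

```python
import math

def predicted_acceptance_rate(circulation: float,
                              reference_circulation: float,
                              reference_acceptance_rate: float) -> float:
    """Acceptance rate falls 5.3 percentage points per doubling of circulation."""
    doublings = math.log2(circulation / reference_circulation)
    return reference_acceptance_rate - 5.3 * doublings

# Hypothetical example: a journal with 8x the circulation of a reference journal
# that accepts 60% of submissions.
print(predicted_acceptance_rate(8000, 1000, 60.0))  # 60 - 3*5.3 = 44.1
```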
-
Constriction on the demand side is further exaggerated by the disproportionate prominence of a very few journals. Moreover, these journals strive to attract specific papers, such as influential trials that generate publicity and profitable reprint sales. This “winner-take-all” reward structure [24] leaves very little space for “successful publication” for the vast majority of scientific work and further exaggerates the winner's curse.
How a focus on excellence exaggerates "the Winner's Curse" (i.e. inflation of the "cost" paid for a successful article, on analogy to auction theory, in terms of inflated results).
-
A signalling benefit from the market—good scientists being identified by their positive results—may be more powerful in the basic biological sciences than in clinical research, where the consequences of incorrect assessment of positive results are more dire.
Positive results are a signalling benefit.
-
In the basic biological sciences, statistical considerations are secondary or nonexistent, results entirely unpredicted by hypotheses are celebrated, and there are few formal rules for reproducibility
HARKing (hypothesizing after the results are known) is the norm in basic biology.
-
Negative or contradictory data may be discussed at conferences or among colleagues, but surface more publicly only when dominant paradigms are replaced.
Negative data is published only when dominant paradigms are replaced. Cf. Kuhn.
-
More alarming is the general paucity in the literature of negative data. In some fields, almost all published studies show formally significant results so that statistical significance no longer appears discriminating [15,16].
In some fields positive bias is so strong that statistical significance is not a discriminator in published results.
-
An empirical evaluation of the 49 most-cited papers on the effectiveness of medical interventions, published in highly visible journals in 1990–2004, showed that a quarter of the randomised trials and five of six non-randomised studies had already been contradicted or found to have been exaggerated by 2005 [9]
Strong evidence of the decay effect.
-
The scarcity of available outlets is artificial, based on the costs of printing in an electronic age and a belief that selectivity is equivalent to quality
On the artificiality of the scarcity.
-
The self-correcting mechanism in science is retarded by the extreme imbalance between the abundance of supply (the output of basic science laboratories and clinical investigations) and the increasingly limited venues for publication (journals with sufficiently high impact).
Ties positive bias in publishing to scarcity--of high impact journals.
-
-
www.tandfonline.com
-
Journal that bans the use of P-values and related measures!
-
-
-
Scandals permeate social and economic life, but their consequences have received scant attention in the economics literature. To shed empirical light on this phenomenon, we investigate how the scientific community's perception of a scientist's prior work changes when one of his articles is retracted. Relative to non-retracted control authors, faculty members who experience a retraction see the citation rate to their earlier, non-retracted articles drop by 10% on average, consistent with the Bayesian intuition that the market inferred their work was mediocre all along. We then investigate whether the eminence of the retracted author and the publicity surrounding the retraction shape the magnitude of the penalty. We find that eminent scientists are more harshly penalized than their less distinguished peers in the wake of a retraction, but only in cases involving fraud or misconduct. When the retraction event had its source in “honest mistakes,” we find no evidence of differential stigma between high- and low-status faculty members.
Retraction lowers citation rate for other papers by the same authors by about 10% on average. For more eminent authors, the penalty is greater, but only for retractions due to misconduct.
-
-
europepmc.org europepmc.org
-
Predictably, the strongest correlations were between Disruptive Innovativeness and Surprise, and between Surprise and Publication Difficulty
Strongest correlations were between disruptiveness (i.e. novelty) and surprise, and between surprise and difficulty of publication!
-
Overall, 123 scientists responded, scoring 1,214 papers between them. On average, investigators tended to give their blockbuster papers high scores for dimensions that reflect evolution:
Most of the most highly cited papers were evolutionary rather than revolutionary (i.e. in Kuhnian terms, "mopping up," "Normal" science rather than paradigm shifting).
-
Twenty scientists (16%) felt that their most important paper published in 2005–08 was not among their top ten most cited. However, most of these 20 papers were still heavily cited (on average in the top 3% published in the same year in terms of citations; seven were in the top 15 papers that the author published in 2005–08). Authors scored these papers higher for Disruptive Innovativeness (in nine cases) and Surprise (in five cases) than their ten most-cited papers.
When scientists said that their most important papers were not among their top ten, they tended to rank them higher on revolutionary scales than evolutionary.
-
They also indicated that blockbuster papers were easy to publish, with some exceptions.
Most highly cited papers were easy to publish, with some exceptions.
-
We got some intriguing feedback. The vast majority of this elite group felt that their most important paper was indeed one of their most-cited ones. Yet they described most of their chart-topping work as evolutionary, not revolutionary.
Top cited biologists describe their most important (i.e. most cited) work as the result of an evolutionary not revolutionary process.
-
-
www.nature.com.ezproxy.alu.talonline.ca www.nature.com.ezproxy.alu.talonline.ca
-
We feel that by allowing grant holders to serve as grant reviewers, a conflict of interest becomes inescapable. Exceptional creative ideas may have difficulty surviving in such a networked system. Scientists who think creatively may be discouraged by the funding process and outcomes, or might not have time to contribute as reviewers to a process that is arduous and not perfectly meritocratic.
On potential conflict of interest in allowing grant holders to serve as grant reviewers (i.e. gate keepers).
-
If NIH study-section members are well-funded but not substantially cited, this could suggest a double problem: not only do the most highly cited authors not get funded, but worse, those who influence the funding process are not among those who drive the scientific literature. We thus examined a random sample of 100 NIH study-section members. Not surprisingly, 83% were currently funded by the NIH. The citation impact of the 100 NIH study-section members was usually good or very good, but not exceptional: the most highly cited paper they had ever published as single, first or last author had received a median of 136 (90–229) citations and most were already mid- or late-career researchers (80% were associate or full professors). Only 1 of the 100 had ever published a paper with 1,000 or more citations as single, first or last author (see Appendix 1 of Supplementary Information for additional citation metrics).This overall picture (see 'Is funding tied to impact?') might, in part, be explained by the NIH policy to try to recruit reviewers who are successful in securing grants (see go.nature.com/kgtlrm). Even so, it is worrying that the majority of highly cited investigators do not have current NIH funding as principal investigators.
More about the connection between serving on a panel and getting funding even with few 1000+ citation papers.
-
We found that the grants of study-section members were more similar to other currently funded NIH grants than were non-members' grants (median score 421.9 versus 387.6, p = 0.039). This could suggest that study-section members fund work that is more similar to their own, or that they are chosen to serve as study-section members because of similarities between their own and funded grants.
Members of "study-section" (i.e. panels?) are better at getting grants, but not particularly well cited on average.
-
There are probably many reasons why highly cited scientists do not have current funding. They might have changed careers or moved to industry, for instance. Perhaps they are receiving some funding as co-investigators, or are still young and have just started their own lab. But the NIH's mandate is to fund “the best science, by the best scientists” — regardless of age or employment sector. We think our findings suggest that this aim is not being met.
The NIH's mandate is to fund "the best science by the best scientists"--so while there might be many reasons that the most highly cited authors are not receiving funding, it looks like the NIH should be funding them.
-
However, concern is growing in the scientific community that funding systems based on peer review, such as those currently used by the NIH, encourage conformity if not mediocrity, and that such systems may ignore truly innovative thinkers2, 3, 4. One tantalizing question is whether biomedical researchers who do the most influential scientific work get funded by the NIH.The influence of scientific work is difficult to measure, and one might have to wait a long time to understand it5. One proxy measurement is the number of citations that scientific publications receive6. Using citation metrics to appraise scientists and their work has many pitfalls7, and ranking people on the basis of modest differences in metrics is precarious. However, one uncontestable fact is that highly cited papers (and thus their authors) have had a major influence, for whatever reason, on the evolution of scientific debate and on the practice of science.To explore the link between highly cited research and NIH funding, we evaluated scientists who have published papers since 2001 — as first, last or single authors — that have so far received 1,000 citations or more. We found that three out of five authors of these influential papers do not currently have NIH funding as principal investigators. Conversely, we found that a large majority of the current members of NIH study sections — the people who recommend which grants to fund — do have NIH funding for their work irrespective of their citation impact, which is typically modest.
Compares authors of extremely highly cited papers (i.e. 1000+ citations) against NIH PIs and vice versa.
-
Too many US authors of the most innovative and influential papers in the life sciences do not receive NIH funding, contend Joshua M. Nicholson and John P. A. Ioannidis.
Argues that game-playing is a better predictor of success at the NIH than citation count.
-
-
bjoern.brembs.net bjoern.brembs.net
-
While Nature felt they had already written enough about how the high-ranking journals publish unreliable research, Science had the impression the topic of journal rank and how it threatens the entire scientific enterprise was not general enough for their readership. Since there are not that many general science journals with sections fitting a review like ours, we next went to PLoS Biology. There, at least, the responsible editor, Catriona MacCallum (whom I respect very much and who is exceedingly likable) sent our manuscript out for review. To our surprise, the reviewers essentially agreed with Nature, that there wasn’t anything new in our conclusions: everybody already knows that high-ranking journals publish unreliable science, e.g.:
Shows how a paper arguing that journal rank is a poor measure of quality was rejected repeatedly because reviewers and journals thought its conclusions were old news!
-
-
journal.frontiersin.org journal.frontiersin.org
-
The common pattern seen where the decline effect has been documented is one of an initial publication in a high-ranking journal, followed by attempts at replication in lower-ranked journals which either failed to replicate the original findings, or suggested a much weaker effect (Lehrer, 2010).
A profoundly ironic reference: Jonah Lehrer is the New Yorker journalist who was found to have made up a Bob Dylan quotation. https://en.wikipedia.org/wiki/Jonah_Lehrer#Plagiarism_and_quote_fabrication_scandal
-
Publication bias is also exacerbated by a tendency for journals to be less likely to publish replication studies (or, worse still, failures to replicate)
On how journals don't like to publish replications, let alone falsifications.
-
Some journals are devoted to publishing null results, or have sections devoted to these, but coverage is uneven across disciplines and often these are not particularly high-ranking or well-read (Schooler, 2011; Nosek et al., 2012)
Journals (or sections of journals) that publish null results are not high ranking or widely read.
-
However, a less readily quantified but more frequent phenomenon (compared to rare retractions) has recently garnered attention, which calls into question the effectiveness of this training. The “decline-effect,” which is now well-described, relates to the observation that the strength of evidence for a particular finding often declines over time (Simmons et al., 1999; Palmer, 2000; Møller and Jennions, 2001; Ioannidis, 2005b; Møller et al., 2005; Fanelli, 2010; Lehrer, 2010; Schooler, 2011; Simmons et al., 2011; Van Dongen, 2011; Bertamini and Munafo, 2012; Gonon et al., 2012). This effect provides wider scope for assessing the unreliability of scientific research than retractions alone, and allows for more general conclusions to be drawn.
Discusses the "Decline effect"--which refers to the observed strength of evidence for a particular finding through time.
-
At the same time, the current publication system is being used to structure the careers of the members of the scientific community by evaluating their success in obtaining publications in high-ranking journals. The hierarchical publication system (“journal rank”) used to communicate scientific results is thus central, not only to the composition of the scientific community at large (by selecting its members), but also to science's position in society.
Point out that the publication system is not only about disseminating science but structuring the careers of scientists.
-
Science is not a system of certain, or established, statements
Popper on how science is not about absolute truth.
-
we suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary
Since any journal ranking system will negatively impact scientific practice (according to the authors), the journal system needs to be abandoned in favour of a "library-based scholarly communication system."
-
-
www.badscience.net www.badscience.net
-
Now the study has been replicated. Three academics – Stuart Ritchie, Chris French, and Richard Wiseman – have re-run three of these backwards experiments, just as Bem ran them, and found no evidence of precognition. They submitted their negative results to the Journal of Personality and Social Psychology, which published Bem’s paper last year, and the journal rejected their paper out of hand. We never, they explained, publish studies that replicate other work.
Journal of Personality and Social Psychology turns down paper falsifying the results of a previous, counter-intuitive paper, because "we never, they [the journal] explained, publish studies that replicate other work."
-
-
-
In 2011, two American microbiologists looked at the rate of retractions among journals and found a strong correlation between the journal impact factor and the rate of retractions. You'll notice that some of the highest-impact journals (the New England Journal of Medicine, Science, and Cell) are furthest along in the "retraction index," a measure of the rate of retractions:
Correlation between prestige and retractions.
-
-
bjoern.brembs.net bjoern.brembs.net
-
Hence, in these six areas, unconfounded data covering orders of magnitude more material than the confounded retraction data are evenly split between results that show: a) Non-retracted experiments reported in high-ranking journals are no more methodologically sound than those published in other journals. b) Non-retracted experiments reported in high-ranking journals are less methodologically sound than those published in other journals
-
Presents an amazing list of places where top journals come out worse than lower ranked ones.
- Criteria for evidence-based medicine are no more likely to be met in higher vs. lower ranking journals;
- There is no correlation between statistical power and journal rank in neuroscience studies:
- Higher ranking journals tend to publish overestimates of true effect sizes from experiments where the sample sizes are too low in gene-association studies:
- Three studies analyzing replicability in biomedical research and found it to be extremely low, not even top journals stand out:
- Where quality can actually be quantified, such as in computer models of crystallography work, ‘top’ journals come out significantly worse than other journals:
- A study came out which showed that in vivo animal experimentation studies are less randomized in higher-ranking journals, and outcomes are not scored blind more often in higher-ranking journals either:
-
Data from thousands of non-retracted articles indicate that experiments published in higher-ranking journals are less reliable than those reported in ‘lesser’ journals.
Higher ranking journals have more massaged data.
-
-
aeon.co aeon.co
-
A single paper published in Nature, Cell, Science or other elite journals can set a scientist’s entire career on secure high ground. And a researcher with a grand string of such publication pearls, as well as prestigious grants, ascends to the scientific equivalent of a rock star. This leads to extreme competition for the precious few slots, and harms collaborative science
On the disproportionate value of a prestigious publication.
-
-
www.nature.com.ezproxy.alu.talonline.ca www.nature.com.ezproxy.alu.talonline.ca
-
The right thing to do as a replicator of someone else's findings is to consult the original authors thoughtfully. If e-mails and phone calls don't solve the problems in replication, ask either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours. Of course replicators must pay for all this, but it is a small price in relation to the time one will save, or the suffering one might otherwise cause by declaring a finding irreproducible. When researchers at Amgen, a pharmaceutical company in Thousand Oaks, California, failed to replicate many important studies in preclinical cancer research, they tried to contact the authors and exchange materials. They could confirm only 11% of the papers3. I think that if more biotech companies had the patience to send someone to the original labs, perhaps the percentage of reproducibility would be much higher.
Author places an extremely high burden on replicators: pay to come to my lab or pay for me to come to yours.
-
People trying to repeat others' research often do not have the time, funding or resources to gain the same expertise with the experimental protocol as the original authors, who were perhaps operating under a multi-year federal grant and aiming for a high-profile publication. If a researcher spends six months, say, trying to replicate such work and reports that it is irreproducible, that can deter other scientists from pursuing a promising line of research, jeopardize the original scientists' chances of obtaining funding to continue it themselves, and potentially damage their reputations.
On how reproducibility hurts everybody's chances of getting funding.
-
So why am I concerned? Isn't reproducibility the bedrock of the scientific process? Yes, up to a point. But it is sometimes much easier not to replicate than to replicate studies, because the techniques and reagents are sophisticated, time-consuming and difficult to master. In the past ten years, every paper published on which I have been senior author has taken between four and six years to complete, and at times much longer. People in my lab often need months — if not a year — to replicate some of the experiments we have done on the roles of the microenvironment and extracellular matrix in cancer, and that includes consulting with other lab members, as well as the original authors.
Reproducibility is time consuming and requires great expertise. But see the next quotation for the real reason!
-
-
link.springer.com.ezproxy.alu.talonline.ca link.springer.com.ezproxy.alu.talonline.ca
-
research is becoming less pioneering and/or that the objectivity with which results are produced and published is decreasing.
Hypothesis is that research is becoming less pioneering!
-
Concerns that the growing competition for funding and citations might distort science are frequently discussed, but have not been verified directly. Of the hypothesized problems, perhaps the most worrying is a worsening of positive-outcome bias. A system that disfavours negative results not only distorts the scientific literature directly, but might also discourage high-risk projects and pressure scientists to fabricate and falsify their data.
A system that disfavours negative results distorts science and discourages high-risk projects.
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
Error was more common than fraud (73.5% of papers were retracted for error (or an undisclosed reason) vs 26.6% retracted for fraud). Eight reasons for retraction were identified; the most common reason was scientific mistake in 234 papers (31.5%), but 134 papers (18.1%) were retracted for ambiguous reasons. Fabrication (including data plagiarism) was more common than text plagiarism. Total papers retracted per year have increased sharply over the decade (r=0.96; p<0.001), as have retractions specifically for fraud (r=0.89; p<0.001). Journals now reach farther back in time to retract, both for fraud (r=0.87; p<0.001) and for scientific mistakes (r=0.95; p<0.001). Journals often fail to alert the naïve reader; 31.8% of retracted papers were not noted as retracted in any way.
Finds, based on the reasons given in retraction notices, that 73.5% of papers were retracted for error or an undisclosed reason. But cf. Resnik and Dinse, who show that only a minority of notices for papers found to be the result of misconduct mention this fact: Resnik, David B., and Gregg E. Dinse. 2013. “Scientific Retractions and Corrections Related to Misconduct Findings.” Journal of Medical Ethics 39 (1): 46–50. doi:10.1136/medethics-2012-100766.
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
Of these 119 statements, only 41.2% mentioned ethics at all (and only 32.8% named a specific ethical problem such as fabrication, falsification or plagiarism), whereas the other 58.8% described the reason for retraction or correction as error, loss of data or replication failure when misconduct was actually at issue.
The low attribution rate of retraction notices of papers published by authors convicted of research misconduct: 41.2% mention ethics at all, and only 32.8% specify what the problem was.
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
Argues that retractions often do not mention ethical problems that led to retraction, instead focussing on "errors" or "lack of reproducibility."
-
-
retractionwatch.com retractionwatch.com
-
What made the result “weak,” in Margot’s opinion, was that the authors presented only three times when the globules were present within one or two days of a full moon. With so few data points, it’s entirely possible the results were due to chance, he said: When I read their paper, I was very surprised to find that they only had three data points and they were claiming this association with the full moon on the basis of [these] three data points. You would expect to find multiple studies in the literature with that type of coincidence with the full moon. Margot appealed the journal’s decision to reject his letter, but was turned down again. Margot’s rebuttal was later published last October in the Journal of Biological Rhythms (JBR) after being rejected by two other biology journals (in addition to Biology Letters). (One month later, he published a corrigendum that presented a more accurate way of calculating the time to a full moon, but did not alter the argument or conclusions, he said.) Catarina Rydin, a botanist from Stockholm University in Sweden and co-author of the original study, defended her paper: The reasons we found it important to report our finding was not the strong statistical power of the results, but a number of small but relevant biological observations that all point in the same direction. And, more importantly, we wanted to report this so that also other scientists can contribute new information on the topic, in Ephedra and other plants. Commenting on Margot’s criticism, Rydin added: The paper by Margot is however not very interesting in our opinion, as it does not contribute any news to science; no new data, and no errors in our calculations. That our hypothesis is based on very few data points is clear from our paper, and the risk of Type I errors is known to every scientist. Sometimes, it is important to have the courage to present also quite bold hypotheses in order for science to progress.
Some back and forth about an alternative analysis of a paper: the authors claim that a reanalysis arguing that their results were not statistically significant was "not interesting" because "the risk of Type I errors" in papers with few data points is "known to every scientist."
-
Journals are inherently disinterested in negative findings
More evidence against negative results.
-
-
retractionwatch.com retractionwatch.com
-
Journals are inherently disinterested in negative findings
More reasons why people don't publish negative results (or test results).
-
-
fivethirtyeight.com fivethirtyeight.com
-
“Science is great, but it’s low-yield,” Fang told me. “Most experiments fail. That doesn’t mean the challenge isn’t worth it, but we can’t expect every dollar to turn a positive result. Most of the things you try don’t work out — that’s just the nature of the process.” Rather than merely avoiding failure, we need to court truth.
Science is "low-yield"--i.e. most experiments fail. Kuhn would argue that most that succeed are also low yield--i.e. "mopping up" Normal science rather than paradigm changing.
-
“By default, we’re biased to try and find extreme results,” Ioannidis, the Stanford meta-science researcher, told me. People want to prove something, and a negative result doesn’t satisfy that craving.
On the unsatisfactory nature of the negative result.
-
Predatory journals flourish, in part, because of the sway that publication records have when it comes to landing jobs and grants, creating incentives for researchers to pad their CVs with extra papers.
Predatory journals (and false science) are an outcome of an incentive system that rewards publication lists.
-
From 2001 to 2009, the number of retractions issued in the scientific literature rose tenfold.
10x rise in the number of retractions in scientific literature since 2001.
-
The setup was simple. Participants were all given the same data set and prompt: Do soccer referees give more red cards to dark-skinned players than light-skinned ones? They were then asked to submit their analytical approach for feedback from other teams before diving into the analysis. Twenty-nine teams with a total of 61 analysts took part. The researchers used a wide variety of methods, ranging — for those of you interested in the methodological gore — from simple linear regression techniques to complex multilevel regressions and Bayesian approaches. They also made different decisions about which secondary variables to use in their analyses. Despite analyzing the same data, the researchers got a variety of results. Twenty teams concluded that soccer referees gave more red cards to dark-skinned players, and nine teams found no significant relationship between skin color and red cards.
29 teams analysed the same data set and came up with very different results.
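A minimal sketch of the underlying phenomenon (synthetic data of my own, not the red-card data set): three defensible analytic routes applied to the same data give noticeably different answers to the same question.

```python
# Sketch of analytic flexibility (assumed, synthetic data): one data set,
# three reasonable specifications, three different-looking answers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 500
confounder = rng.normal(size=n)                      # e.g. league, position -- assumed
exposure = 0.5 * confounder + rng.normal(size=n)     # the predictor of interest
outcome = 0.1 * exposure + 0.6 * confounder + rng.normal(size=n)

# Analyst 1: simple correlation, no adjustment
r, p1 = stats.pearsonr(exposure, outcome)

# Analyst 2: regression slope after adjusting for the confounder
X = np.column_stack([np.ones(n), exposure, confounder])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# Analyst 3: t-test on a median split of the exposure
hi = outcome[exposure > np.median(exposure)]
lo = outcome[exposure <= np.median(exposure)]
_, p3 = stats.ttest_ind(hi, lo)

print(f"Analyst 1: r = {r:.2f} (p = {p1:.3g})")
print(f"Analyst 2: adjusted slope = {beta[1]:.2f}")
print(f"Analyst 3: median-split t-test p = {p3:.3g}")
```

None of these hypothetical analysts is cheating; they simply made different, defensible choices.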
-
P-hacking is generally thought of as cheating, but what if we made it compulsory instead? If the purpose of studies is to push the frontiers of knowledge, then perhaps playing around with different methods shouldn’t be thought of as a dirty trick, but encouraged as a way of exploring boundaries. A recent project spearheaded by Brian Nosek, a founder of the nonprofit Center for Open Science, offered a clever way to do this.
On turning p-hacking into a methodology.
-
Since publishing novel results can garner a scientist rewards such as tenure and jobs, there’s ample incentive to p-hack. Indeed, when Simonsohn analyzed the distribution of p-values in published psychology papers, he found that they were suspiciously concentrated around 0.05. “Everybody has p-hacked at least a little bit,” Simonsohn told me.
Published p-values concentrate suspiciously around 0.05, suggesting p-hacking is the norm.
-
As you manipulated all those variables in the p-hacking exercise above, you shaped your result by exploiting what psychologists Uri Simonsohn, Joseph Simmons and Leif Nelson call “researcher degrees of freedom,” the decisions scientists make as they conduct a study. These choices include things like which observations to record, which ones to compare, which factors to control for, or, in your case, whether to measure the economy using employment or inflation numbers (or both). Researchers often make these calls as they go, and often there’s no obviously correct way to proceed, which makes it tempting to try different things until you get the result you’re looking for.
Researcher degrees of freedom: all the hard-to-quantify analytic decisions that go into a result (closely related to HARKing: hypothesizing after the results are known).
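A minimal sketch (hypothetical data, not the FiveThirtyEight interactive) of how those degrees of freedom inflate false positives: with no real effect at all, trying four plausible outcome measures and keeping the best one pushes the "significance" rate well above the nominal 5%.

```python
# Sketch of researcher degrees of freedom (assumed, illustrative): test several
# outcomes against a predictor that has no real effect, report the best p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, n_sim, n_outcomes = 100, 2000, 4
hits = 0

for _ in range(n_sim):
    group = rng.integers(0, 2, n)               # predictor with no true effect
    outcomes = rng.normal(size=(n, n_outcomes)) # e.g. employment, inflation, ...
    pvals = [stats.ttest_ind(y[group == 0], y[group == 1]).pvalue
             for y in outcomes.T]
    if min(pvals) < 0.05:                       # keep whichever "works"
        hits += 1

print("nominal false-positive rate: 0.05")
print(f"rate after shopping {n_outcomes} outcomes: {hits / n_sim:.2f}")
```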
-
The p-value reveals almost nothing about the strength of the evidence, yet a p-value of 0.05 has become the ticket to get into many journals. “The dominant method used [to evaluate evidence] is the p-value,” said Michael Evans, a statistician at the University of Toronto, “and the p-value is well known not to work very well.” Scientists’ overreliance on p-values has led at least one journal to decide it has had enough of them. In February, Basic and Applied Social Psychology announced that it will no longer publish p-values. “We believe that the p < .05 bar is too easy to pass and sometimes serves as an excuse for lower quality research,” the editors wrote in their announcement. Instead of p-values, the journal will require “strong descriptive statistics, including effect sizes.”
On the misunderstanding of p-value in science.
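A minimal sketch (illustrative numbers of my own) of why p < 0.05 says so little on its own: a trivially small effect clears the bar almost every time once the sample is big enough, while a genuinely large effect in a tiny sample usually fails it.

```python
# Sketch (assumed parameters): how often p < 0.05 is reached for a tiny effect
# with a huge sample versus a large effect with a tiny sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def significance_rate(effect, n, sims=500):
    """Fraction of simulated two-group studies with p < 0.05."""
    hits = 0
    for _ in range(sims):
        a = rng.normal(effect, 1, n)
        b = rng.normal(0, 1, n)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hits += 1
    return hits / sims

print("trivial effect (d = 0.05), n = 20,000 per group:",
      significance_rate(0.05, 20000))
print("large effect   (d = 0.80), n = 8 per group:     ",
      significance_rate(0.80, 8))
```

Which is exactly why the journal asks for effect sizes rather than p-values.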
-
Welcome to the wild world of p-hacking
A great demonstration of P-Hacking!
-
-
www.nature.com.ezproxy.alu.talonline.ca www.nature.com.ezproxy.alu.talonline.ca
-
how primitive the framework is that we use to evaluate policies and assess strength in science and technology
See Marburger III, John H. 2015. Science Policy Up Close. Harvard University Press. p. 163
-
For decades, the DOD's legacy of innovation and economic growth concealed weaknesses in the civilian agencies, which is why so many people still believe that putting more money into civilian research and development is the panacea for what ails US innovation. Former presidential science adviser John Marburger publicly blew the whistle on this simple-minded notion in 2005, when he noted "how primitive the framework is that we use to evaluate policies and assess strength in science and technology". Partly in response to Marburger's provocation, the National Science Foundation initiated its Science of Science and Innovation Policy programme, with the explicit aim of guiding effective science policy-making by creating a foundation of data, theory, methods and models. This worthy goal carries an uncomfortable implication: that the nation's civilian research and development enterprise had been built on a foundation of hidden assumptions and unsubstantiated claims.
The DOD's record of innovation concealed how primitive a framework the civilian agencies use to evaluate policies and assess strength in R&D.
-
Meanwhile, the main civilian science agencies — the National Institutes of Health (NIH), the National Science Foundation, NASA and the Department of Energy (DOE) — developed roles in training scientists and creating knowledge that accelerated innovation. But such agencies were mere booster rockets for the DOD's main engine of innovation. They lacked, and continue to lack, the attributes that accounted for the military's successes — in particular, its focused mission, enduring ties to the private sector and role as an early customer for advanced technologies.
The US military was an ideal funder of science because it was very mission focussed and was also a consumer of early results. This is in contrast to the civilian agencies.
-
-
www.researchresearch.com www.researchresearch.com
-
And yet excellence, measured by comparison with the work of others, is still used as the pre-eminent criterion to assess researchers and grant proposals. It seems that its main purpose is to act as a vague umbrella term, under which the genuine criteria can be concealed. As such, the term ‘excellence’ provides perfect cover and fulfils its role with distinction. Which is why it is in such ascendence.
Excellence as comparison of work against others.
-
We have become complicit in a system that systematically limits our funds to a fraction of what is needed to support us.
While this may be true in an absolute sense (i.e. everybody could use more money), funding for R&D in universities has grown immensely over the last century. See slide 22 of Stephan, Paula. 2014. “How Economics Shapes Science.” Lecture presented at the ASPCT, March. http://sites.gsu.edu/pstephan/files/2014/07/How-Economics-Shapes-Science.Stephan-1dxay4y.pdf.
-
Scientists aspire to be—and drive each other to be—excellent. But excellence is a Trojan Horse that academics have accepted and taken into their midst. This is because the definition of excellence is unclear. For most people, it is one of those “I know it when I see it” things. But in policy terms this is too vague—in which case, it is clearly the top x per cent of proposals: 2, 3 or perhaps 5 per cent. This makes excellence very convenient. If an organisation commits to funding ‘excellence in science’, it has automatically committed to funding only x per cent of all the grants submitted. This saves the organisation a huge amount of money.
Because "excellence" is vaguely defined, it adopts a high rejection rate as a proxy... making science cheaper.
-
-
sites.gsu.edu sites.gsu.edu
-
Funding for University research has grown immensely.
-
-
www.nature.com.ezproxy.alu.talonline.ca www.nature.com.ezproxy.alu.talonline.ca
-
If most scientists are risk-averse, there is little chance that transformative research will occur, leading to significant returns from investments in research and development. Funding bodies sometimes give money specifically for field-changing research, but not nearly enough — Pioneer grants from the NIH fund fewer than 1% of applicants
Funding for "field-changing" research is very rare (<1% of NIH). But cf. Kuhn on "Normal Science": "Mopping-up operations are what engage most scientists throughout their careers" (24).
Kuhn, Thomas S., and Ian Hacking. 2012. The Structure of Scientific Revolutions. Fourth edition. Chicago ; London: The University of Chicago Press.
-
Other economic incentives indirectly render the scientific process less efficient — such as the tendency of scientists to avoid risk by submitting to funding organizations only those proposals that they consider 'sure bets'
Problem of scientists submitting only "sure" projects to funders.
-
cash incentives adopted by countries such as China, South Korea and Turkey encourage local scientists to submit papers to high-end journals despite the low probability of success. These payments have achieved little more than overloading reviewers, taking them away from their work, and have increased submissions by the three countries to the journal Science by 46% in recent years, with no corresponding increase in the number of publications
Cash incentives for submission increase submission rates massively but without any increase in publications. Cf. PLoS experience.
-
Incentives that encourage people to make one decision instead of another for monetary reasons play an important part in science. This is good news if the incentives are right. But if they are not, they can cause considerable damage to the scientific enterprise.
Incentives can promote science if they are well planned. But they are counterproductive and harmful if they aren't.
-
- Jan 2016
-
search.proquest.com.ezproxy.alu.talonline.ca search.proquest.com.ezproxy.alu.talonline.ca
-
Harpers 1873 the telegraph, pp. 359-360
Discusses the future importance of the telegraph in terms of its impact on knowledge: will free language from philology and allow us to make improvements on that. Mentions the beginnings of the typewriter.
"The immense extension of the general telegraphic system, and its common use for business and social correspondence and the dissemination of public intelligence, are far more important to the community than any of these incidental applications of the system. The telegraph system is extending much more rapidly than the railroad system, and is probably exerting even a greater influence upon the mental development of the people than the railroad is exerting in respect to the material and physical prosperity of the country. It has penetrated almost every mind with a new sense of the vastnessof distance and the value of time. It is commonly said that it has annihilated time and space--and this is true in a sense; but in a deeper sense it has magnified both, for it has been the means of expanding vastly the inadequate conceptions which we form of space and distance, and of giving a significance to the idea of time which it never before had to the human mind. It lifts every man who reads its messages above his own little circle, gives him in a vivid flash, as it were, a view of vast distances, and tends by an irresistible influence to make him a citizen of his country and a fellow of the race as well as a member of his local community.
In other respects its influence, though less obvious, will probably prove equally profound. So long as the mysterious force employed in the telegraph was only known in the mariner’s compass, or by scientific investigations, or in a few special processes of art, the knowledge of the electric or magnetic force had, so to speak, a very limited soil to grow in. By means of the telegraph many thousands of persons in this country are constantly employed in dealing with it practically--generating it, insulating it, manipulating it. The invention of Morse has engaged some one in every considerable town and village in studying its properties, watching its operation, and using it profitably. Nothing could be better calculated to attract general attention to this newfound power, and to disseminate that knowledge of it from which new applications may be expected to result.
The tendency of scientific pursuits to promote the love of truth and the habit of accuracy is strikingly illustrated in the zeal and fidelity with which the minute and long-continued investigations have been pursued that have led to the development of this new realm of knowledge and this new element in human affairs.
But perhaps the most extended and important influence which the telegraph is destined to exert upon the human mind is that which it will ultimately work out through its influence upon language.
Language is the instrument of thought. It is not merely a means of expression. A word is a tool for thinking, before the thinker uses it as a signal for communicating his thought. There is no good reason why it should not be free to be improved, as other implements are. Language has hitherto been regarded merely in a historical point of view, and even now philology is little more than a record of the differences in language which have separated mankind, and of the steps of development in it which each branch of the human family has pursued. And as a whole it may be said that the science of language in the hands of philologists is used to perpetuate the differences and irregularities of speech which prevail. The telegraph is silently introducing a new element, which, we may confidently predict, will one day present this subject in a different aspect. The invention of Morse has given beyond recall the pre-eminence to the Italian alphabet, and has secured the ultimate adoption throughout the world of that system or some improvement upon it. The community of intelligence, and the necessary convertibility of expression between different languages, which the press through the influence of the telegraph is establishing, have commenced a process of assimilation, the results of which are already striking to those who carefully examine the subject. An important event transpiring in any part of the civilized world is concisely expressed in a dispatch which is immediately reproduced in five or ten or more different languages. A comparison of such dispatches with each other will show that in them the peculiar and local idioms of each language are to a large extent discarded. The process sifts out, as it were, the characteristic peculiarities of each language, and it may be confidently said that nowhere in literature will be found a more remarkable parallelism of structure, and even of word forms, combined with equal purity and strength in each language, than in the telegraphic columns of the leading dailies of the capitals of Europe and America. A traveler in Europe, commencing the study of the language of the country where he may be, finds no reading which he can so easily master as the telegraphic news column. The telegraph is cosmopolitan, and is rapidly giving prominence to those modes of speech in which different languages resemble each other. When we add to this the fact that every step of advance made by science and the arts increases that which different languages have in common by reason of the tendency of men in these pursuits the world over to adopt a common nomenclature, and to think alike or in similar mental processes, we see the elements already at work which will ultimately relegate philology to its proper and useful place among the departments of history, and will free language from those restrictions which now forbid making any intentional improvements in it. With the general use of the telegraphic system other things begin to readjust themselves to its conditions. Short-hand writing is more cultivated now than ever before. The best reporter must understand both systems, and be able to take his notes of a conversation while it passes, and then by stepping into an office transmit it at once without writing out. There is now in practical use in the city of New York a little instrument the size of a sewing-machine, having a keyboard like the printing telegraph, by which any one can write in print as legibly as this page, and almost as rapidly as a reporter in short-hand.
When we consider the immense number of people that every day by writing a telegram and counting the words are taking a most efficient lesson in concise composition, we see in another way the influence of this invention on the strength of language. If the companies should ever adopt the system of computing all their charges by the number of letters instead of words, as indeed they do now for all cipher or unintelligible messages, the world would very quickly be considering the economic advantages of phonetic or other improved orthography.
These processes are in operation all the world over, and in reference to the use of one and the same alphabet. By the principle which Darwin describes as natural selection short words are gaining the advantage of long words, direct forms of expression are gaining the advantage over indirect, words of precise meaning the advantage of the ambiguous, and local idioms are every where at a disadvantage. The doctrine of the Survival of the Fittest thus tends to the constant improvement and points to the ultimate unification of language.
The idea of a common language of the world, therefore, however far in the future it may be, is no longer a dream of the poet nor a scheme of a conqueror. And it is significant of the spirit of the times that this idea, once so chimerical, should at the time we are writing find expression in the inaugural of our Chief Magistrate, in his declaration of the belief “ that our Great Maker is preparing the world in His own good time to become one nation, speaking one language, and when armies and navies will be no longer required.”
-
Harpers 1873 the telegraph, pp. 334-336.
"With the exception of those general readers whose taste or course of reading has led them somewhat into scientific paths, there are not many persons who find it easy to form a definite idea of the precise mode of action by which a telegraphic wire conveys its messages--so multitudinous and varies in their character, and transmitted with such inceivable rapidity...<pb n="335"/><pb n="336"/>[stories of people looking for physical messages]. Still to the mass even of intelligent and well-informed readers, the precise mode in which the communications are made is a mystery more or less inscrutable.
The difficulty of forming a clear conception of the subject is increased by the fact that while we have to deal with novel and strange facts, we have also to use old words in novel and inconsistent senses.
-
Harper's 1873 p. 334 "The Telegraph": Mr. Orton, the president of Western Union, comments on the value of metadata as a way of gathering commercial intelligence:
"Our observer, if he could not only see the oscillations of electric condition, but also discern the meaning of the pulsations, and read the messages as they circulate, would thus have a panorama of the business and social affairs of the country passing under his eye. But the telegraphy has now become so true a representative of our life that would hardly be necessary to read the messages in order to find an indication of the state of the country. The mere degree of activity in the business uses of the telegraphy in any given direction affords an index of the prosperity of the section of the country served thereby. Mr. Orton, the president of the Western Union Company, gave a striking statement of this fact in his argument before a committee of Congress in 1870. He said: "The fact is, the telegraph lives upon commerce. It is the nervous system of the commercial system. If you will sit down with me at my office for twenty minutes, I will show you what the condition of business is at any given time in any locality in the United States. After three years of careful study of the matter, I am ready to appeal to the telegraphy receipts as a criterion under all circumstances. This last year, the grain business in the West has been very dull; as a consequence, the receipts from that section have fallen off twenty-five per cent. Business in the South has been gaining a little, month by month, for the last year or so; and now the telegraphic receipts from that quarter give stronger indications of returning prosperity than at any previous time since the war."'
-
-
search.proquest.com.ezproxy.alu.talonline.ca search.proquest.com.ezproxy.alu.talonline.ca
-
Discussion of the conceptual differences introduced by the electric telegraph. Cf. Gleick The Information, p. 150.
-
-
www.uleth.ca www.uleth.ca
-
This form is for all students that are not doing research or paid from research accounts. Please code all student payments on this form to the student account codes: 5211 – Student Positions, 5140 – Graduate Assistant, 5150 – Scholarship.
b. Non-Classified is any payment that does not fit in another category.
c. Lump sum payments are typically a flat rate for work and deemed hours are required for Employment Insurance reasons.
d. One-time payments are not employment income, Payroll reports these on a T4A.
-
-
www.uleth.ca www.uleth.ca
-
Use this form for all payments from Research accounts and for any job that is doing research even if not paid out of the Research funds including Student Research, excluding the Post Doctorial Fellowships.
-
-
www.force11.org www.force11.org
-
FORCE11 Manifesto
An interesting object lesson: I was figuring out hypothes.is and annotated the PDF rather than this. Here's the link to my comments: https://via.hypothes.is/https://www.force11.org/sites/default/files/files/Force11Manifesto20120219.pdf
-
better systems to permit collaborative work by geographically distributed colleagues; better systems to permit collaborative writing, with fail-safe versioning; better tools for richer interactive data and metadata visualization, enabling dynamic exploration; and easier data publication mechanisms, including better integration with data acquisition instrumentation, so that the process becomes automated.
What's striking to me about all these is that the first three are not really about scholComm primarily, whereas the last is.
-
For some reason I did the PDF rather than the HTML. Here's the link to mine: https://via.hypothes.is/https://www.force11.org/sites/default/files/files/Force11Manifesto20120219.pdf
-
Maximizing informal contacts through conferences, workshops, meetings, calls, webcasts
An argument for the conference/colloquium.
-
emergence of Force11 ̇OA provides
Broken sentence, but can't quite figure out what's wrong.
-
Not only are the products of research activity still firmly rooted in the past, so too are our means of assessing the impact of those products and of the scholars who produce them. For five decades, the impact of a scholarly work—an entity that is already narrowly defined, in the sciences as a journal article, and in the humanities as a monograph—has been judged by counting the number of citations it receives from other scholarly works, or, worse, by attributing worth to an individual’s work based solely on the overall impact factor of the journal in which it happens to be published. We now live in an age in which other methods of evaluation, including article-level usage metrics, blog comments, discussion on mail lists, press quotes, and other forms of media, are becoming increasingly important reflections of scholarly and public impact. Failure to take these aspects into account means not only that the impact and/or quality of a publication is not adequately measured, but also that the current incentivization and evaluation system for scholars does not relate well to the actual impact of their activities
We've done much less on this than I'd hoped. It is also interesting that this section of the document reads a little draftier than the others.
-
Third, research software developers typically work in a competitive environment, either academic or commercial, where innovation is rewarded much more highly than evolutionary and collaborative software reuse. This is especially true in a funding environment driven by the need for intensive innovation, where reusing other peoples’ code is a likely source of criticism
If I can special plead, this is something I'd really like to see us pursue more... i.e. the misalignment and counterproductive aspect of academic reward systems.
-
The advent of the internet has greatly reduced the monetary value that can be extracted from paper-based academic content
Is this true? I'd have thought that, like the book industry, the internet has had a multiplier effect.
-
and so are unlikely to be a major source of continued revenue
Again, I think the real surprise is their durability as customers rather than their retreat.
-
Academic publishers have been slower to encounter, but are not immune from, the disruption that the internet has wrought on other content industries
I'd say the real surprise is how little disruption they have faced. The relatively slow growth of Open Access, for example, is nothing like the effect Napster or iTunes had on the record companies. http://dx.doi.org/10.3998/3336451.0018.309
-
neither has an attractive business model
The two activities discussed here are "publishing" and "repositories"; I agree that the repository model is redundant, though there is some institutional value in collecting an institution's output together. But I think the last couple of years have shown that the first activity is not without some decent business models: I think the Open Library of the Humanities, for example, shows one; the Journal Incubator, if I might be so bold, is another.
-