54 Matching Annotations
  1. Jul 2020
  2. Jun 2020
  3. May 2020
  4. Apr 2020
  5. Apr 2016
    1. in the latter both the wide differential in manuscript rejection rates and the high correlation between referee recommendations and editorial decisions suggest that reviewers and editors agree more on acceptance than on rejection.

      In "specific and focussed" fields, the agreement tends to be more on acceptance than rejection.

    2. In the former there is also much more agreement on rejection than acceptance

      In "general and diffuse" fields, there is more agreement on paper rejection than in "specific and focussed."

    1. I consider that my job, as a philosopher, is to activate the possible, and not to describe the probable, that is, to think situations with and through their unknowns when I can feel them

      The job of a philosopher is to "activate the possible, not describe the probable."

  6. Mar 2016
    1. Levine, T., Asada, K. J., & Carpenter, C. (2009). Sample sizes and effect sizes are negatively correlated in meta-analyses: Evidence and implications of a publication bias against non-significant findings. Communication Monographs, 76, 286–302.
    2. Paris, G., De Leo, G., Menozzi, P., & Gatto, M. (1998). Region-based citation bias in science. Nature, 396, 6708.
    3. Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86, 638–641.

    4. Song, F. J., Parekh-Bhurke, S., Hooper, L., Loke, Y. K., Ryder, J. J., Sutton, A. J., et al. (2009). Extent of publication bias in different categories of research cohorts: A meta-analysis of empirical studies. BMC Medical Research Methodology, 9, 79.
    5. Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance—Or vice versa. Journal of the American Statistical Association, 54, 30–34.

      publication bias

    1. Osuna, C., Cruz-Castro, L., & Sanz-Menéndez, L. (2011). Overturning some assumptions about the effects of evaluation systems on publication performance. Scientometrics, 86, 575–592.

      evaluation systems and publication performance

    2. Pautasso, M. (2010). Worsening file-drawer problem in the abstracts of natural, medical and social science databases. Scientometrics, 85(1), 193–202.
    3. Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13(2), 90–100.
    4. Shelton, R. D., Foland, P., & Gorelskyy, R. (2009). Do new SCI journals have a different national bias? Scientometrics, 79(2), 351–363.
    5. Silvertown, J., & McConway, K. J. (1997). Does “publication bias” lead to biased science? Oikos, 79(1), 167–168.
    6. Yousefi-Nooraie, R., Shakiba, B., & Mortaz-Hejri, S. (2006). Country development and manuscript selection bias: A review of published studies. BMC Medical Research Methodology, 6, 37.

      On developing countries and science

    7. Evanschitzky, H., Baumgarth, C., Hubbard, R., & Armstrong, J. S. (2007). Replication research’s disturbing trend. Journal of Business Research, 60(4), 411–415.

      replication research

    8. Jeng, M. (2006). A selected history of expectation bias in physics. American Journal of Physics, 74(7), 578–583.

      History of expectation bias in physics

    9. Ioannidis, J. P. A. (2008a). Perfect study, poor evidence: Interpretation of biases preceding study design. Seminars in Hematology, 45(3), 160–166.

      effect of positive bias

    10. Feigenbaum, S., & Levy, D. M. (1996). Research bias: Some preliminary findings. Knowledge and Policy: The International Journal of Knowledge Transfer and Utilization, 9(2 & 3), 135–142.

      Positive bias

    11. Song, F., Parekh, S., Hooper, L., Loke, Y. K., Ryder, J., Sutton, A. J., et al. (2010). Dissemination and publication of research findings: An updated review of related biases. Health Technology Assessment, 14(8), 1–193.

      positive bias

    12. De Rond, M., & Miller, A. N. (2005). Publish or perish—Bane or boon of academic life? Journal of Management Inquiry, 14(4), 321–329.

      On how increased pressure to publish diminishes creativity.

    13. Several possible problems have been hypothesised, including: undue proliferation of publications and atomization of results (Gad-el-Hak 2004; Statzner and Resh 2010); impoverishment of research creativity, favouring “normal” science and predictable outcomes at the expense of pioneering, high-risk studies (De Rond and Miller 2005); growing journal rejection rates and bias against negative and non-significant results (because they attract fewer readers and citations) (Statzner and Resh 2010; Lortie 1999); sensationalism, inflation and over-interpretation of results (Lortie 1999; Atkin 2002; Ioannidis 2008b); increased prevalence of research bias and misconduct (Qiu 2010). Indirect empirical evidence supports at least some of these concerns. The per-capita paper output of scientists has increased, whilst their career duration has decreased over the last 35 years in the physical sciences (Fronczak et al. 2007). Rejection rates of papers have increased in the high-tier journals (Larsen and von Ins 2010; Lawrence 2003). Negative sentences such as “non-significant difference” have decreased in frequency in papers’ abstracts, while catchy expressions such as “paradigm shift” have increased in the titles (Pautasso 2010; Atkin 2002). No study, however, has yet verified directly whether the scientific literature is enduring actual changes in content

      Good discussion (and bibliography) of problems involved in hyper competition

    14. Formann, A. K. (2008). Estimating the proportion of studies missing for meta-analysis due to publication bias. Contemporary Clinical Trials, 29(5), 732–739.

      estimate of positive bias in clinical trials.

    15. Fronczak, P., Fronczak, A., & Holyst, J. A. (2007). Analysis of scientific productivity using maximum entropy principle and fluctuation-dissipation theorem. Physical Review E, 75(2), 026103. doi:10.1103/PhysRevE.75.026103.

      On rising scientific productivity over shorter careers.

    16. Atkin, P. A. (2002). A paradigm shift in the medical literature. British Medical Journal, 325(7378), 1450–1451.

      On the rise of sexy terms like "paradigm shift" in abstracts.

    17. Bonitz, M., & Scharnhorst, A. (2001). Competition in science and the Matthew core journals. Scientometrics, 51(1), 37–54.

      Matthew effect

    1. To publish. And sometimes publish in the right journals.... In my discipline ...there’s just a few journals, and if you’re not in that journal, then your publication doesn’t really count

      Importance of "top" journals

    2. In addition to that, the other thing that they focus on is science as celebrity.... So the standards are, “How much did it cost, and is it in the news?” And if it didn’t cost much and if it is not in the news, but it got a lot of behind-the-scenes talk within your discipline, they don’t know that, nor do they care

      Importance of news-worthiness.

    3. You’ve got to have a billion publications in my field. That is the bottom line. That’s the only thing that counts. You can fail to do everything else as long as you have lots and lots of papers

      Importance of publications in science--overrules everything else.

    1. The winner-take-all aspect of the priority rule has its drawbacks, however. It can encourage secrecy, sloppy practices, dishonesty and an excessive emphasis on surrogate measures of scientific quality, such as publication in high-impact journals. The editors of the journal Nature have recently exhorted scientists to take greater care in their work, citing poor reproducibility of published findings, errors in figures, improper controls, incomplete descriptions of methods and unsuitable statistical analyses as evidence of increasing sloppiness. (Scientific American is part of Nature Publishing Group.) As competition over reduced funding has increased markedly, these disadvantages of the priority rule may have begun to outweigh its benefits. Success rates for scientists applying for National Institutes of Health funding have recently reached an all-time low. As a result, we have seen a steep rise in unhealthy competition among scientists, accompanied by a dramatic proliferation in the number of scientific publications retracted because of fraud or error. Recent scandals in science are reminiscent of the doping problems in sports, in which disproportionately rich rewards going to winners has fostered cheating.

      How the priority rule is killing science.

    1. The role of external influences on the scientific enterprise must not be ignored. With funding success rates at historically low levels, scientists are under enormous pressure to produce high-impact publications and obtain research grants. The importance of these influences is reflected in the burgeoning literature on research misconduct, including surveys that suggest that approximately 2% of scientists admit to having fabricated, falsified, or inappropriately modified results at least once (24). A substantial proportion of instances of faculty misconduct involve misrepresentation of data in publications (61%) and grant applications (72%); only 3% of faculty misconduct involved neither publications nor grant applications.

      Importance of low funding rates as incitement to fraud

    2. The predominant economic system in science is “winner-take-all” (17, 18). Such a reward system has the benefit of promoting competition and the open communication of new discoveries but has many perverse effects on the scientific enterprise (19). The scientific misconduct among both male and female scientists observed in this study may well reflect a darker side of competition in science. That said, the preponderance of males committing research misconduct raises a number of interesting questions. The overrepresentation of males among scientists committing misconduct is evident, even against the backdrop of male overrepresentation among scientists, a disparity more pronounced at the highest academic ranks, a parallel with the so-called “leaky pipeline.” There are multiple factors contributing to the latter, and considerable attention has been paid to factors such as the unique challenges facing young female scientists balancing personal and career interests (20), as well as bias in hiring decisions by senior scientists, who are mostly male (21). It is quite possible that, in at least some cases, misconduct at high levels may contribute to attrition of women from the senior ranks of academic researchers.

      Reason for fraud: winner take all

    1. Editors, Publishers, Impact Factors, and Reprint Income

      On the incentives for journal editors to publish papers they think might improve their impact factor... and on how citations are gamed.

  7. Feb 2014
    1. National governments are also weighing in on the issue. The UK government aims this April to make text-mining for non-commercial purposes exempt from copyright, allowing academics to mine any content they have paid for.

      UK government intervening to make text-mining for non-commercial purposes exempt from copyright.

    2. “Our plan is just to wait for the copyright exemption to come into law in the United Kingdom so we can do our own content-mining our own way, on our own platform, with our own tools,” says Mounce. “Our project plans to mine Elsevier’s content, but we neither want nor need the restricted service they are announcing here.”

      This seems a sensible move: the obstacle here is not copyright itself but the onerous contract that Elsevier wants to put in place.

    3. some researchers feel that a dangerous precedent is being set. They argue that publishers wrongly characterize text-mining as an activity that requires extra rights to be granted by licence from a copyright holder, and they feel that computational reading should require no more permission than human reading. “The right to read is the right to mine,” says Ross Mounce of the University of Bath, UK, who is using content-mining to construct maps of species’ evolutionary relationships.

      "The right to read is the right to mine."