19 Matching Annotations
  1. Mar 2016
    1. Begg, C. B., & Berlin, J. A. (1988). Publication bias: A problem in interpreting medical data. Journal of the Royal Statistical Society A, 151, 419–463.
    2. Gerber, A. S., & Malhotra, N. (2008). Publication bias in empirical sociological research: Do arbitrary significance levels distort published results? Sociological Methods & Research, 37, 3–30.
    3. Gilbody, S. M., Song, F., Eastwood, A. J., & Sutton, A. (2000). The causes, consequences and detection of publication bias in psychiatry. Acta Psychiatrica Scandinavica, 102, 241–249.
    4. Kennedy, D. (2004). The old file-drawer problem. Science, 305, 45.
    5. Koletsi, D., Karagianni, A., Pandis, N., Makou, M., Polychronopolou, A., & Eliades, T. (2009). Are studies reporting significant results more likely to be published? American Journal of Orthodontics and Dentofacial Orthopedics, 136, 632e1.

      positive

    6. Krzyzanowska, M. K., Pintilie, M., & Tannock, I. F. (2003). Factors associated with failure to publish large randomized trials presented at an oncology meeting. Journal of the American Medical Association, 290, 495–501.
    7. Levine, T., Asada, K. J., & Carpenter, C. (2009). Sample sizes and effect sizes are negatively correlated in meta-analyses: Evidence and implications of a publication bias against non-significant findings. Communication Monographs, 76, 286–302.
    8. Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86, 638–641.


    9. Song, F. J., Parekh-Bhurke, S., Hooper, L., Loke, Y. K., Ryder, J. J., Sutton, A. J., et al. (2009). Extent of publication bias in different categories of research cohorts: A meta-analysis of empirical studies. BMC Medical Research Methodology, 9, 79.
    10. Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance—Or vice versa. Journal of the American Statistical Association, 54, 30–34.

      publication bias

    1. Pautasso, M. (2010). Worsening file-drawer problem in the abstracts of natural, medical and social science databases. Scientometrics, 85(1), 193–202.
    2. Silvertown, J., & McConway, K. J. (1997). Does "publication bias" lead to biased science? Oikos, 79(1), 167–168.
    3. Jeng, M. (2006). A selected history of expectation bias in physics. American Journal of Physics, 74(7), 578–583.

      History of expectation bias in physics

    4. Ioannidis, J. P. A. (2008a). Perfect study, poor evidence: Interpretation of biases preceding study design. Seminars in Hematology, 45(3), 160–166.

      effect of positive bias

    5. Feigenbaum, S., & Levy, D. M. (1996). Research bias: Some preliminary findings. Knowledge and Policy: The International Journal of Knowledge Transfer and Utilization, 9(2 & 3), 135–142.

      Positive bias

    6. Song, F., Parekh, S., Hooper, L., Loke, Y. K., Ryder, J., Sutton, A. J., et al. (2010). Dissemination and publication of research findings: An updated review of related biases. Health Technology Assessment, 14(8), 1–193.

      positive bias

    1. But there’s, I think there is a question of how you interpret the data, even ... if the experiments are very well designed. And, in terms of advice—not that I’m going to say that it’s shocking—but one of my mentors, whom I very much respect as a scientist—I think he’s extraordinarily good—advised me to always put the most positive spin you can on your data. And if you try to present, like, present your data objectively, like in a job seminar, you’re guaranteed to not get the job.

      Importance of "spinning" data

    2. You are. And you know what the problems are in doing the experiments. And if you, in your mind, think that there should be one more control—because you know this stuff better than anybody else because you’re doing it, you know—you decided not to do that, not to bring up what the potential difficulties are, you have a better chance of getting that paper published. But it’s—I don’t think it’s the right thing to do.

      deliberate positive bias

  2. May 2015
    1. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for.

      That is rather outrageous that we've known about this since 1959 and have done nothing about it.