- Jan 2024
-
peculiargenres.commons.msu.edu
- Mar 2023
-
www.youtube.com
-
German academic publishing in Niklas Luhmann's day was dramatically different from that of the late 20th and early 21st centuries. There was no peer review, so Luhmann never faced the level of gatekeeping academics encounter today, which helped swell his academic journal publication record. (28:30)
-
- Jul 2022
-
www.wikiwand.com
-
Perhaps the most widely recognized failing of peer review is its inability to ensure the identification of high-quality work.
-
- May 2022
-
Local file
-
Studying, done properly, is research, because it is about gaining insight that cannot be anticipated and will be shared within the scientific community under public scrutiny.
-
-
wiobyrne.com
-
or at least they pretend
I don't think we're pretending. I know I'm not!
-
Senior colleagues indicate that I should not have to balance out publishing in “traditional, peer-reviewed publications” as well as open, online spaces.
Do your colleagues who read your work, annotate it, and comment on it not count as peer review?
Am I wasting my time by annotating all of this? :) (I don't think so...)
-
-
notes.knowledgefutures.org
-
He notes that authors of such projects should consider the return on investment. It takes time to go through community feedback, so one needs to determine whether the payoff will be worthwhile. Nevertheless, if his next work is suitable for community review, he'd like to do it again.
This is an apropos question. It is also somewhat contingent on what sort of platform the author "owns" to be able to do outreach and drive readers and participation.
-
A short text "interview" with the authors of three works that posted versions of their books online for an open review via annotation.
These could be added to the example and experience of Kathleen Fitzpatrick.
-
-
danallosso.substack.com
-
I returned to another OER Learning Circle and wrote an ebook version of a Modern World History textbook. As I wrote this, I tested it out on my students. I taught them to use the annotation app, Hypothesis, and assigned them to highlight and comment on the chapters each week in preparation for class discussions. This had the dual benefits of engaging them with the content and indicating to me which parts of the text were working well and which needed improvement. Since I wasn't telling them what they had to highlight and respond to, I was able to see what elements caught students' attention and interest. And possibly more important, I was able to "mind the gaps", and rework parts that were too confusing or too boring to get the attention I thought they deserved.
This is an intriguing off-label use case for Hypothes.is, one that sits within the realm of peer-review use cases.
Dan is essentially using the idea of annotation as engagement within a textbook as a means of proactively improving it. He's mentioned it before in Hypothes.is Social (and Private) Annotation.
Because one can actively see the gaps without readers necessarily being aware of their "review", this may be a far better method than asking for active reviews of materials.
Reviewers are probably not as likely to actively mark sections they don't find engaging. Has anyone done research in this space on improving texts this way? Certainly annotation provides a means of helping to do this.
-
-
journals.plos.org
-
However, the degraded performance across all groups at 6 weeks suggests that continued engagement with memorised information is required for long-term retention of the information. Thus, students and instructors should exercise caution before employing any of the measured techniques in the hopes of obtaining a ‘silver bullet’ for quick acquisition and effortless recall of important data. Any system of memorization will likely require continued practice and revision in order to be effective.
Abysmally sad that this is presented without the context of any of the work over the last century and a half on spaced repetition.
I wonder how this point slipped past the reviewers and isn't at least discussed somewhat narratively here.
-
- Apr 2022
-
asapbio.org
-
Considering campaigns to post journal reviews on preprints. (n.d.). ASAPbio. Retrieved April 29, 2022, from https://asapbio.org/considering-campaigns-to-post-journal-reviews-on-preprints
-
- Mar 2022
-
www.the-scientist.com
-
Mullins, M. (2021, November 1). Opinion: The Problem with Preprints. The Scientist Magazine®. https://www.the-scientist.com/critic-at-large/opinion-the-problem-with-preprints-69309
-
- Feb 2022
-
wblau.medium.com
-
Blau, W. (2022, February 14). Climate Change: Journalism’s Greatest Challenge. Medium. https://wblau.medium.com/climate-change-journalisms-greatest-challenge-2bb59bfb38b8
-
-
www.cbc.ca
-
CBC News. (2022, January 15). Canadian COVID-19 vaccine study seized on by anti-vaxxers—Highlighting dangers of early research in pandemic. CBC. https://www.cbc.ca/news/health/covid-19-vaccine-study-omicron-anti-vaxxers-1.6315890
-
-
twitter.com
-
Peter R. Hansen. (2022, February 3). Weighting, is the answer. The only study to find lockdowns ⬆️mortality is given weight 91.8% = 7390/8030, and then you get -0.2% to be the estimate. To summarize: -0.2% META-STUDY ESTIMATE is based on 91.8% ONE STUDY and 8.2% ALL OTHER STUDIES. https://t.co/j6e7ziPNAI [Tweet]. @ProfPHansen. https://twitter.com/ProfPHansen/status/1489366528956919808
-
-
twitter.com
-
Kimberly Prather, Ph.D. (2022, January 11). This paper is not published..not reviewed...and has serious problems that will hopefully be fixed during the review process. The lead authors know this. See posts by me @linseymarr @jljcolorado . [Tweet]. @kprather88. https://twitter.com/kprather88/status/1481019341625724928
-
-
twitter.com
-
AAI. (2022, January 29). More than seventy peer-reviewed #COVID-19, #SARS, and #MERS @J_Immunol articles are #FreeToRead http://ow.ly/lwTr50Hyu5F #immunology #ReadTheJI https://t.co/7Hi4g8ZySp [Tweet]. @ImmunologyAAI. https://twitter.com/ImmunologyAAI/status/1487425781647216646
-
- Dec 2021
-
www.nature.com
-
Replicating scientific results is tough—But essential. (2021). Nature, 600(7889), 359–360. https://doi.org/10.1038/d41586-021-03736-4
-
-
twitter.com
-
AIMOS. (2021, November 30). How can we connect #metascience to established #science fields? Find out at this afternoon’s session at #aimos2021 Remco Heesen @fallonmody Felipe Romeo will discuss. Come join us. #OpenScience #OpenData #reproducibility https://t.co/dEW2MkGNpx [Tweet]. @aimos_inc. https://twitter.com/aimos_inc/status/1465485732206850054
-
- Nov 2021
-
pubmed.ncbi.nlm.nih.gov
-
I have no problem with publishers making a profit, or with peer reviewers doing their work for free. The problem I have is when there is such an enormous gap between those two positions.
If publishers make billions in profit (and they do), while at the same time reviewers are doing a billion dollars worth of work for free, that seems like a broken system.
I think there are parallels with how users contribute value to social media companies. In both cases, users/reviewers are getting some value in return, but most of the value that's captured goes to the publisher/tech company.
I'd like to see a system where more of the value accrues to the reviewers. This could be in the form of direct payment, although this is probably less preferable because of the challenges of trying to convert the value of different kinds of peer review into a dollar amount.
Another problem with simply paying reviewers is that it retains the status quo; we keep the same system with all of its faults and merely redistribute the profits. This is an OK option, as it at least sees some of the value that normally accrues to publishers moving to reviewers.
I also don't believe that open access, in its current form, is a good option either. There are still enormous costs associated with publishing; the only difference is that those costs are now covered by institutions instead of the reader. The publisher still makes a heart-stopping profit.
A more elegant solution, although more challenging, would be for academics to step away from publishers altogether and start their own journals, on their own terms.
-
-
twitter.com
-
COVID-19 Living Evidence. (2021, November 12). As of 12.11.2021, we have indexed 257,633 publications: 18,674 pre-prints 238,959 peer-reviewed publications Pre-prints: BioRxiv, MedRxiv Peer-reviewed: PubMed, EMBASE, PsycINFO https://t.co/ytOhLG90Pi [Tweet]. @evidencelive. https://twitter.com/evidencelive/status/1459163720450519042
-
- Oct 2021
-
www.reuters.com
-
Reuters. (2021, October 6). Sweden, Denmark pause Moderna COVID-19 vaccine for younger age groups. Reuters. https://www.reuters.com/business/healthcare-pharmaceuticals/sweden-pauses-use-moderna-covid-vaccine-cites-rare-side-effects-2021-10-06/
Tags
- vaccination
- report
- lang:en
- risk
- rare
- is:news
- Sweden
- stop
- COVID-19
- Moderna
- younger
- health agency
- vaccine
- peer review
- side effect
- Denmark
-
-
graphics.reuters.com
-
Sharma, M., Scarr, S., & Kell, K. (n.d.). Speed Science. Reuters. Retrieved August 19, 2021, from https://graphics.reuters.com/CHINA-HEALTH-RESEARCH/0100B5ES3MG/index.html
-
- Aug 2021
-
docmaps.knowledgefutures.org
-
coronacentral.ai
-
CoronaCentral. (n.d.). Retrieved 11 August 2021, from https://coronacentral.ai/
-
-
www.nature.com
-
McIntyre, L. (2021). Talking to science deniers and sceptics is not hopeless. Nature, 596(7871), 165–165. https://doi.org/10.1038/d41586-021-02152-y
-
- Jul 2021
-
www.reddit.com
-
u/dawnlxh. (2021). Reviewing peer review: Does the process need to change, and how? r/BehSciAsk. Reddit.
-
-
-
Antonoyiannakis, M. (2021). Does Publicity in the Science Press Drive Citations? ArXiv:2104.13939 [Physics]. http://arxiv.org/abs/2104.13939
-
-
twitter.com
-
Health Nerd on Twitter. (2020). Twitter. Retrieved 26 February 2021, from https://twitter.com/GidMK/status/1327872397794439168
-
-
www.journals.uchicago.edu
-
Heesen, R., & Bright, L. K. (2020). Is Peer Review a Good Idea? The British Journal for the Philosophy of Science, 000–000. https://doi.org/10.1093/bjps/axz029
-
-
-
Tulleken, C. van. (2021). Covid-19: Sputnik vaccine rockets, thanks to Lancet boost. BMJ, 373, n1108. https://doi.org/10.1136/bmj.n1108
-
-
twitter.com
-
Nicola Low #EveryDayCounts #StillFBPE on Twitter. (2020). Twitter. Retrieved 27 February 2021, from https://twitter.com/nicolamlow/status/1336958661151821825
-
-
twitter.com
-
ReconfigBehSci on Twitter. (2020). Twitter. Retrieved 27 February 2021, from https://twitter.com/SciBeh/status/1339855911796543488
-
-
psyarxiv.com
-
Yesilada, M., Holford, D. L., Wulf, M., Hahn, U., Lewandowsky, S., Herzog, S., Radosevic, M., Stuchlý, E., Taylor, K., Ye, S., Saxena, G., & El-Halaby, G. (2021). Who, What, Where: Tracking the development of COVID-19 related PsyArXiv preprints. PsyArXiv. https://doi.org/10.31234/osf.io/evmgs
-
- Jun 2021
-
www.nature.com
-
Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J., Thorn, F. S., Vazire, S., Esterling, K. M., & Nosek, B. A. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 1–8. https://doi.org/10.1038/s41562-021-01142-4
-
-
www.mdpi.com
-
Sallam, M. (2021). COVID-19 Vaccine Hesitancy Worldwide: A Concise Systematic Review of Vaccine Acceptance Rates. Vaccines, 9(2), 160. https://doi.org/10.3390/vaccines9020160
-
-
iannotate.org
-
recently published book
I was honored to interview Remi and Antero (along with other MITP authors) about collaborative community review and how it fit with their traditional peer review experience. The blog post can be found here.
-
-
www.jclinepi.com
-
Calster, B. V., Wynants, L., Riley, R. D., Smeden, M. van, & Collins, G. S. (2021). Methodology over metrics: Current scientific standards are a disservice to patients and society. Journal of Clinical Epidemiology, 0(0). https://doi.org/10.1016/j.jclinepi.2021.05.018
-
-
scholarlykitchen.sspnet.org
-
Publisher costs usually include copyediting/formatting and organizing peer review. While these content transformations are fundamental and beneficial, they alone cannot justify the typical APC (article processing charge), especially since peer reviewers are not paid.
But peer reviewers are largely responsible for generating the assertions you talk about in the next paragraph, and which apparently, justify the cost of publishing.
-
- May 2021
-
-
Trovò, B., & Massari, N. (2021). Ants-Review: A Protocol for Incentivized Open Peer-Reviews on Ethereum. ArXiv:2101.09378 [Cs]. http://arxiv.org/abs/2101.09378
-
- Apr 2021
-
psyarxiv.com
-
Pleskac, T. J., Kyung, E., Chapman, G. B., & Urminsky, O. (2021, April 23). Single- or double-blind review? A field study of system preference, reliability, bias, and validity. https://doi.org/10.31234/osf.io/q2tkw
-
-
-
The furthest-reaching openness in this variant occurs when there is transparency about the authors, the reviewers, and the reviews themselves. Open review procedures also include the option of publishing the reviews afterwards as accompanying texts to a publication.
In my view, full transparency would only be achieved if rejected submissions were also posted online, together with the reviews that led to their rejection. To prevent opinion or citation cartels (or at least make them obvious), that seems to me even more important than naming the reviewers.
-
- Mar 2021
-
www.cam.ac.uk
-
Machine learning models for diagnosing COVID-19 are not yet suitable for clinical use. (2021, March 15). University of Cambridge. https://www.cam.ac.uk/research/news/machine-learning-models-for-diagnosing-covid-19-are-not-yet-suitable-for-clinical-use
-
-
-
Mambrini, A., Baronchelli, A., Starnini, M., Marinazzo, D., & De Domenico, M. (2020). PRINCIPIA: A Decentralized Peer-Review Ecosystem. ArXiv:2008.09011 [Nlin, Physics:Physics]. http://arxiv.org/abs/2008.09011
-
-
deevybee.blogspot.com
-
Deevybee. (2020, December 6). BishopBlog: Faux peer-reviewed journals: a threat to research integrity. BishopBlog. http://deevybee.blogspot.com/2020/12/faux-peer-reviewed-journals-threat-to.html
-
-
twitter.com
-
ReconfigBehSci. (2020, November 9). final speaker in our ‘Open science and crisis knowledge management’ session: Michele Starnini on radically redesigning the peer review system #scibeh2020 https://t.co/Gsr66BRGcJ [Tweet]. @SciBeh. https://twitter.com/SciBeh/status/1325734449783443461
-
-
www.collabovid.org
-
Collabovid. (n.d.). Retrieved 6 March 2021, from https://www.collabovid.org/
-
-
www.principia.network
-
PRINCIPIA - Decentralized peer review. (n.d.). Retrieved 5 March 2021, from http://www.principia.network/
-
-
aimos.community
-
Conference Details. (n.d.). AIMOS. Retrieved 5 March 2021, from https://aimos.community/2020-details
-
-
redteammarket.com
-
Market, R. T. (n.d.). Build trust through criticism. Red Team Market. Retrieved 4 March 2021, from https://redteammarket.com/
-
-
twitter.com
-
ReconfigBehSci on Twitter. (n.d.). Twitter. Retrieved 1 March 2021, from https://twitter.com/SciBeh/status/1354456391772229632
-
- Feb 2021
-
twitter.com
-
Dr Elaine Toomey on Twitter. (n.d.). Twitter. Retrieved 24 February 2021, from https://twitter.com/ElaineToomey1/status/1357343820417933316
-
-
www.stm-assoc.org
-
The Rights Retention Strategy provides a challenge to the vital income that is necessary to fund the resources, time, and effort to provide not only the many checks, corrections, and editorial inputs required but also the management and support of a rigorous peer review process
This is an untested statement and does not take into account the perspectives of those contributing to the publishers' revenue. The Rights Retention Strategy (RRS) relies on the author's accepted manuscript (AAM) and for an AAM to exist and to have the added value from peer-review a Version of Record (VoR) must exist. Libraries recognise this fundamental principle and continue to subscribe to individual journals of merit and support lucrative deals with publishers. From some (not all) librarians' and possibly funders' perspectives these statements could undermine any mutual respect.
-
- Jan 2021
-
twitter.com
-
ReconfigBehSci [@SciBeh] (2020-01-27) new post on Scibeh's meta-science reddit describing the new rubric for peer review of preprints aimed at broadening the pool of potential 'reviewers' so that students could provide evaluations as well! https://reddit.com/r/BehSciMeta/comments/l64y1l/reviewing_peer_review_does_the_process_need_to/ please take a look and provide feedback! Twitter. Retrieved from: https://twitter.com/SciBeh/status/1354456393877749763
-
-
arxiv.org
-
Mambrini, A., Baronchelli, A., Starnini, M., Marinazzo, D., & De Domenico, M. (2020). PRINCIPIA: A Decentralized Peer-Review Ecosystem. Retrieved from: https://arxiv.org/pdf/2008.09011.pdf
-
- Nov 2020
-
-
Soderberg, C. K., Errington, T., Schiavone, S. R., Bottesini, J. G., Thorn, F. S., Vazire, S., Esterling, K. M., & Nosek, B. A. (2020). Research Quality of Registered Reports Compared to the Traditional Publishing Model. MetaArXiv. https://doi.org/10.31222/osf.io/7x9vy
-
- Oct 2020
-
twitter.com
-
Health Nerd on Twitter. (n.d.). Twitter. Retrieved October 17, 2020, from https://twitter.com/GidMK/status/1316511734115385344
-
- Sep 2020
-
twitter.com
-
Daniël Lakens on Twitter. (n.d.). Twitter. Retrieved September 23, 2020, from https://twitter.com/lakens/status/1308115862247952386
-
-
twitter.com
-
Max Primbs on Twitter. (n.d.). Twitter. Retrieved September 14, 2020, from https://twitter.com/MaxPrimbs/status/1304516869509066760
-
-
outbreaksci.prereview.org
-
Outbreak Science Rapid PREreview • Dashboard. (n.d.). Retrieved September 11, 2020, from https://outbreaksci.prereview.org/dashboard?q=COVID-19&q=Coronavirus&q=SARS-CoV-2
-
-
rapidreviewscovid19.mitpress.mit.edu
-
Rapid Reviews COVID-19. (n.d.). Rapid Reviews COVID-19. Retrieved September 11, 2020, from https://rapidreviewscovid19.mitpress.mit.edu/
-
-
retractionwatch.com
-
Marcus, A. A. (2020, September 8). COVID-19 arrived on a meteorite, claims Elsevier book chapter. Retraction Watch. https://retractionwatch.com/2020/09/08/covid-19-arrived-on-a-meteorite-claims-elsevier-book-chapter/
-
-
www.reddit.com
-
r/BehSciMeta - Comment by u/dawnlxh on ”A completely re-imagined approach to peer review and publishing: PRINCIPIA”. (n.d.). Reddit. Retrieved September 10, 2020, from https://www.reddit.com/r/BehSciMeta/comments/if03sk/a_completely_reimagined_approach_to_peer_review/g4nnuc5
-
-
www.reddit.com
-
r/BehSciMeta—No appeasement of bad faith actors. (n.d.). Reddit. Retrieved June 2, 2020, from https://www.reddit.com/r/BehSciMeta/comments/gv0y99/no_appeasement_of_bad_faith_actors/
-
-
www.nature.com
-
Clements, J. C. (2020). Don’t be a prig in peer review. Nature. https://doi.org/10.1038/d41586-020-02512-0
-
- Aug 2020
-
openreview.net
-
About | OpenReview. (n.d.). Retrieved May 30, 2020, from https://openreview.net/about
-
-
sci-hub.tw
-
Schalkwyk, M. C. I. van, Hird, T. R., Maani, N., Petticrew, M., & Gilmore, A. B. (2020). The perils of preprints. BMJ, 370. https://doi.org/10.1136/bmj.m3111. https://t.co/qNPLYCeT99?amp=1
-
-
www.biorxiv.org
-
Besançon, L., Peiffer-Smadja, N., Segalas, C., Jiang, H., Masuzzo, P., Smout, C., Deforet, M., & Leyrat, C. (2020). Open Science Saves Lives: Lessons from the COVID-19 Pandemic. BioRxiv, 2020.08.13.249847. https://doi.org/10.1101/2020.08.13.249847
-
-
twitter.com
-
Michael Eisen on Twitter: “A core problem in science publishing today is that we have a system where the complex, multidimensional assessment of the rigor, validity, utility, audience and impact of a work that emerges from peer review gets reduced to a single overvalued ‘accept/reject’ decision.” / Twitter. (n.d.). Twitter. Retrieved August 10, 2020, from https://twitter.com/mbeisen/status/1291752487448276992
-
-
twitter.com
-
Esther Choo, MD MPH on Twitter: “Question for Twitter. Why didn’t academia take the lead on Covid information? Why didn’t schools of med & public health across the US band together, put forth their experienced scientists in epidemiology, virology, emergency & critical care, pandemic and disaster response...” / Twitter. (n.d.). Twitter. Retrieved August 10, 2020, from https://twitter.com/choo_ek/status/1291789978716868608
-
-
ropensci.org
-
OSF: A Project Management Service Built for Research. rOpenSci: Open Tools for Open Science. Accessed 10 August 2020. https://ropensci.org/blog/2020/08/04/osf/
-
-
www.fastcompany.com
-
Taraborelli, D. (2020, August 5). How the COVID-19 crisis has prompted a revolution in scientific publishing. Fast Company. https://www.fastcompany.com/90537072/how-the-covid-19-crisis-has-prompted-a-revolution-in-scientific-publishing
-
-
-
Hoekstra, R., & Vazire, S. (2020, July 29). Hoekstra & Vazire (2020), Intellectual humility is central to science. https://doi.org/10.31234/osf.io/edh2s
-
- Jul 2020
-
www.nytimes.com
-
Eisen, M. B., & Tibshirani, R. (2020, July 20). Opinion | How to Identify Flawed Research Before It Becomes Dangerous. The New York Times. https://www.nytimes.com/2020/07/20/opinion/coronavirus-preprints.html
-
-
twitter.com
-
Dan Quintana on Twitter: “Tomorrow at 1pm CEST I’ll be doing a virtual talk for the Rotterdam R.I.O.T. Science Club (@rdam_riots) on using Twitter for science 🧬 I’ll be covering both the why and the how + I’ll be leaving plenty of time for a Q&A session. Watch here: https://t.co/nXHry9Inyi https://t.co/T6u7lvgAhO” / Twitter. (n.d.). Twitter. Retrieved June 17, 2020, from https://twitter.com/dsquintana/status/1264623289814659072
-
-
www.youtube.com
-
COVID-19, preprints, and the information ecosystem. (n.d.). Retrieved June 17, 2020, from https://www.youtube.com/watch?v=yWi4Q5rZiO0
-
-
medium.com
-
Brock, J. (2020). Rapid Registered Reports initiative aims to stop coronavirus researchers following false leads. Nature Index.
-
-
featuredcontent.psychonomic.org
-
Yesilada, M. (2020). From peer review to “science without the drag” via PsyArXiv. Psychonomic Society.
-
-
www.sciencedirect.com
-
Clark, J., Glasziou, P., Mar, C. D., Bannach-Brown, A., Stehlik, P., & Scott, A. M. (2020). A full systematic review was completed in 2 weeks using automation tools: A case study. Journal of Clinical Epidemiology, 121, 81–90. https://doi.org/10.1016/j.jclinepi.2020.01.008
-
-
statmodeling.stat.columbia.edu
-
Andrew. (2020, June 11). Bla bla bla PEER REVIEW bla bla bla. Retrieved from https://statmodeling.stat.columbia.edu/2020/06/11/bla-bla-bla-peer-review-bla-bla-bla/
-
-
-
Which of these best practices is your team already doing regularly?
Peer Review Practices
-
-
smartbear.com
-
Authors should annotate code before the review occurs because annotations guide the reviewer through the changes
Guide the reviewer during the review process
-
It's also useful to watch internal process metrics, including:
- Inspection rate
- Defect rate
- Defect density
-
Before implementing a process, your team should decide how you will measure the effectiveness of peer review and name a few tangible goals.
Set a few tangible goals; "fix more bugs" is not a good example.
-
Code reviews in reasonable quantity, at a slower pace for a limited amount of time results in the most effective code review.
Keep the pace under 500 LOC per hour.
-
The brain can only effectively process so much information at a time; beyond 400 LOC, the ability to find defects diminishes.
<400 LOC
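The three process metrics above are simple ratios over a review session. As a minimal sketch of how a team might track them (the `ReviewSession` record and function names are hypothetical, not from SmartBear's tooling):

```python
from dataclasses import dataclass

# Hypothetical record of one code review session.
@dataclass
class ReviewSession:
    loc: int        # lines of code inspected
    hours: float    # time spent reviewing
    defects: int    # defects found

def review_metrics(s: ReviewSession) -> dict:
    """Compute the three process metrics: inspection rate,
    defect rate, and defect density (per 1,000 LOC)."""
    return {
        "inspection_rate_loc_per_hour": s.loc / s.hours,
        "defect_rate_per_hour": s.defects / s.hours,
        "defect_density_per_kloc": 1000 * s.defects / s.loc,
    }

def within_guidelines(s: ReviewSession) -> bool:
    """Flag sessions against the suggested limits:
    at most 400 LOC per session, under 500 LOC per hour."""
    return s.loc <= 400 and (s.loc / s.hours) < 500

session = ReviewSession(loc=350, hours=1.0, defects=7)
m = review_metrics(session)
# 350 LOC reviewed in one hour with 7 defects found:
# inspection rate 350 LOC/h, defect rate 7/h, density 20 per KLOC
```

Tracking these over many sessions is what makes the "slower pace, limited time" advice testable: a rising inspection rate with a falling defect density suggests reviews are being rushed.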
-
-
www.youtube.com
-
Dr Daniel Quintana | Using Twitter for Science | R.I.O.T. Science Club—YouTube. (2020, May 26). https://www.youtube.com/watch?v=pA5Y4cO934I
-
- Jun 2020
-
www.the100.ci
-
Mis-allocated scrutiny. (2020, June 24). The 100% CI. http://www.the100.ci/2020/06/24/mis-allocated-scrutiny/
-
-
doi.org
-
Horbach, S. P. J. M. (2020). Pandemic Publishing: Medical journals drastically speed up their publication process for Covid-19. BioRxiv, 2020.04.18.045963. https://doi.org/10.1101/2020.04.18.045963
-
-
-
Peer review should be an honest, but collegial, conversation. (2020). Nature, 582(7812), 314–314. https://doi.org/10.1038/d41586-020-01622-z
-
-
oaspa.org
-
Scholarly publishers are working together to maximize efficiency during COVID-19 pandemic. (2020, April 27). OASPA. https://oaspa.org/scholarly-publishers-working-together-during-covid-19-pandemic/
-
-
featuredcontent.psychonomic.org
-
Lindsay, D. S. (2020, May 29). Enhancing Peer Review of Scientific Reports. Psychonomic Society Featured Content. https://featuredcontent.psychonomic.org/enhancing-peer-review-of-scientific-reports/
-
-
featuredcontent.psychonomic.org
-
Holcombe, A. (2020, May 25). As new venues for peer review flower, will journals catch up? Psychonomic Society Featured Content. https://featuredcontent.psychonomic.org/as-new-venues-for-peer-review-flower-will-journals-catch-up/
-
-
www.cnbc.com
-
Farr, C. (2020, May 23). Why scientists are changing their minds and disagreeing during the coronavirus pandemic. CNBC. https://www.cnbc.com/2020/05/23/why-scientists-change-their-mind-and-disagree.html
-
-
-
Knöchelmann, M. (2020, February 25) Open Humanities: Why Open Science in the Humanities is not Enough. Impact of Social Sciences. https://blogs.lse.ac.uk/impactofsocialsciences/2020/02/25/open-humanities-why-open-science-in-the-humanities-is-not-enough/
Tags
- research
- technology
- scholarship
- science
- peer review
- open humanities
- is:blog
- social challenge
- unity
- cooperation
- lang:en
- open science
-
-
www.nature.com
-
Science in the time of COVID-19. Nat Hum Behav 4, 327–328 (2020). https://doi.org/10.1038/s41562-020-0879-9
-
-
twitter.com
-
richard horton on Twitter: “David Spiegelhalter said this morning, ‘Peer review has just disappeared from scientific analysis.’ This is complete and utter nonsense. Our editors across 19 Lancet journals do nothing else but peer review. We intensively review all COVID-19 research papers. You know this David.” / Twitter. (n.d.). Twitter. Retrieved June 5, 2020, from https://twitter.com/richardhorton1/status/1263020292932358145
-
-
-
Heathers, J. (2020, May 21). Preprints Aren’t The Problem—WE Are The Problem. Medium. https://medium.com/@jamesheathers/preprints-arent-the-problem-we-are-the-problem-75d29a317625
-
-
twitter.com
-
Daniël Lakens on Twitter
-
-
featuredcontent.psychonomic.org
-
Lewandowsky, S. (2020, June 1). A tale of two island nations: Lessons for crisis knowledge management. Psychonomic Society Featured Content. https://featuredcontent.psychonomic.org/a-tale-of-two-island-nations-lessons-for-crisis-knowledge-management/
-
- May 2020
-
www.nytimes.com
-
Bajak, A., & Howe, J. (2020, May 14). Opinion | A Study Said Covid Wasn’t That Deadly. The Right Seized It. The New York Times. https://www.nytimes.com/2020/05/14/opinion/coronavirus-research-misinformation.html
-
-
psyarxiv.com
-
Ikeda, K., Yamada, Y., & Takahashi, K. (2020). Post-Publication Peer Review for Real [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/sp3j5
-
-
twitter.com
-
Carl T. Bergstrom on Twitter
-
-
twitter.com
-
John Burn-Murdoch on Twitter
-
-
phylogenomics.blogspot.com
-
The Tree of Life: Stop deifying “peer review” of journal publications: (2012, February 4). The Tree of Life. https://phylogenomics.blogspot.com/2012/02/stop-deifying-peer-review-of-journal.html
-
-
psyarxiv.com
-
Orben, A., Tomova, L., & Blakemore, S.-J. (2020). The effects of social deprivation on adolescent social development and mental health [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/7afmd
-
-
www.nber.org
-
Hadavand, A., Hamermesh, D.S., & Wilson, W.W. (2020). Is scholarly refereeing productive (at the margin)? The National Bureau of Economic Research. https://www.nber.org/papers/w26614
-
-
theconversation.com
-
Munafo, M. (n.d.). What you need to know about how coronavirus is changing science. The Conversation. Retrieved May 6, 2020, from http://theconversation.com/what-you-need-to-know-about-how-coronavirus-is-changing-science-137641
-
-
neurochambers.blogspot.com
-
Chambers, C. (2020, March 16). CALLING ALL SCIENTISTS: Rapid evaluation of COVID19-related Registered Reports at Royal Society Open Science.
-
- Apr 2020
-
psyarxiv.com
-
Jamieson, R. K., & Pexman, P. M. (2020, April 20). Moving Beyond 20 Questions: We (Still) Need Stronger Psychological Theory. https://doi.org/10.1037/cap0000223
-
-
-
Call for Papers: Commentaries on the Coronavirus Pandemic. Deadline: April 30, 2020.
-
-
www.vice.com
-
Koebler, J. (2020 April 09). The viral 'study' about runners spreading coronavirus is not actually a study. Vice. https://www.vice.com/en_us/article/v74az9/the-viral-study-about-runners-spreading-coronavirus-is-not-actually-a-study
-
-
docs.google.com
-
Google Doc. COVID-19 Preprint Tracker
-
-
arxiv.org
-
Kerzendorf, W. E., Patat, F., Bordelon, D., van de Ven, G., & Pritchard, T. A. (2020). Distributed peer review enhanced with natural language processing and machine learning. Nature Astronomy. https://doi.org/10.1038/s41550-020-1038-y
-
-
medium.com
-
Heathers, J. (2020 April 13). Hurry, don't rush. Medium. https://medium.com/@jamesheathers/hurry-dont-rush-e1aee626e733
-
-
-
New CEPR publication: Covid Economics, Vetted and Real-Time Papers | Centre for Economic Policy Research. (n.d.). Retrieved April 17, 2020, from https://cepr.org/content/new-cepr-publication-covid-economics-vetted-and-real-time-papers
-
-
www.natureindex.com
-
Rapid Registered Reports initiative aims to stop coronavirus researchers following false leads. (n.d.). Retrieved April 20, 2020, from https://www.natureindex.com/news-blog/rapid-registered-report-coronavirus-aims-to-stop-researchers-following-false-research-leads
-
-
www.mdpi.com
-
There are good preprints and bad preprints, just like there are with journal articles. Overall, do not be afraid to be scooped or plagiarized! Preprints also actually protect against scooping [21,22]. Preprints establish the priority of discovery as a formally published item. Therefore, a preprint acts as proof of provenance for research ideas, data, code, models, and results—all outputs and discoveries.
One of the reasons given for not posting a preprint is the fear of having one's idea stolen.
This is another cultural factor, and an unfounded fear. On the contrary, by posting a preprint a researcher can stake a claim to the idea earlier.
There are good preprints and bad preprints, and evaluation rests in the readers' hands. This is the next cultural barrier: the majority of readers want to delegate the responsibility for verifying, checking, and guaranteeing the quality of a paper to the reviewers.
This delegation of responsibility is hard to defend when the peer-review documents themselves are closed, and peer review is not free from bias.
Moreover, lecturers would be violating the very principle they impress upon their students: to read critically.
-
One of the reasons is the delay in the peer-review process and the subsequent publication
One of the biggest criticisms of preprints is the absence of peer review (PR).
The PR process is indeed central to publication. For all its benefits, PR can also do harm, because it introduces time delays.
Interestingly, there are papers showing that many published versions of articles do not differ in content or presentation from their preprint versions.
-
-
eartharxiv.org eartharxiv.org
-
Some research has shown that preprints tend to be of similar quality to their final published versions in journals
One of the biggest criticisms of preprints is the absence of peer review.
The PR process is indeed central to publication. For all its benefits, PR can also do harm, because it introduces time delays.
Interestingly, there are papers showing that many published versions of articles do not differ in content or presentation from their preprint versions.
-
- Feb 2020
-
riojournal.com riojournal.com
-
Keywords
I would include "open data" and "data sharing" as keywords too.
-
-
web.hypothes.is web.hypothes.is
-
Transparent Review in Preprints (TRiP) — that enables journals and peer review services to post peer reviews of submitted manuscripts on CSHL’s preprint server bioRxiv.
Incredible use of annotation technology in peer review over preprints! Watch this space! I'm lucky that I get to use annotation in my work at the Knowledge Futures Group.
-
- Jan 2020
-
web.hypothes.is web.hypothes.is
-
Interested authors can select In Review when they submit their manuscript through Editorial Manager. Participating will enable them to track the progress of their manuscript through peer review with immediate access to review reports, share their work to engage a wider community through open annotation using Hypothesis, follow a transparent editorial checklist, and gain early collaboration and citation opportunities.
Annotation in peer review, whether on preprints or through a more traditional manuscript submission system, offers the option for reviewers, editors, and authors to give and receive feedback in context. And I'm super excited about this new project.
-
- Dec 2019
-
academic.oup.com academic.oup.com
-
Supplementary data
Of special interest is that a reviewer openly discussed, in a blog post, his general thoughts about the state of the art in the field based on what he had been looking at in the paper. The post came out just after he completed his first-round review, and before an editorial decision was made.
http://ivory.idyll.org/blog/thoughts-on-assemblathon-2.html
This spawned additional blog posts that broadened the discussion among the community, again looking toward the future. See: https://www.homolog.us/blogs/genome/2013/02/23/titus-browns-thoughts-on-the-assemblathon-2-paper/
Further, the authors, then in the process of revising their manuscript, joined in on Twitter, reaching out to the community at large for suggestions on revisions and additional thoughts. Their paper had been posted on arXiv, allowing for this type of commenting and author/reader interaction. See: https://arxiv.org/abs/1301.5406
The Assemblathon.org site collected and presented all the information on the discussion surrounding this article. https://assemblathon.org/page/2
A blog post by the editors followed, describing this ultra-open peer review and highlighting how these discussions during the peer review process became a very forward-looking conversation about the state of the field, based on what the reviewers were seeing in this paper, and about the directions the community should now focus on. This broader open discussion and its very positive nature could only happen in an open, transparent review process. See: https://blogs.biomedcentral.com/bmcblog/2013/07/23/ultra-open-peer-review/
-
- Oct 2019
-
riojournal.com riojournal.com
-
A Million Brains in the Cloud
Arno Klein and Satrajit S. Ghosh published this research idea in 2016 and opened it to review. In fact, you could review their abstract directly in RIO, but for the MOOC activity "open peer review" we want you to read and annotate their proposal using this Hypothes.is layer. You can add annotations by simply highlighting a section that you want to comment on, or add a page note and say in a few sentences what you think of their ideas. You can also reply to comments that your peers have already made. Please sign up to Hypothes.is and join the conversation!
-
- Sep 2019
-
-
Transparent Review in Preprints will allow journals and peer review services to show peer reviews next to the version of the manuscript that was submitted and reviewed.
A subtle but important point here is that when the manuscript is a preprint, there are two public-facing documents being tied together: the "published" article and the preprint. The review-as-annotation becomes the cross-member in that document association.
-
-
psyarxiv.com psyarxiv.com
-
I am writing this review for the Drummond and Sauer comment on Mathur and VanderWeele (2019). To note, I am familiar with the original meta-analyses considered (one of which I wrote), the Mathur and VanderWeele (henceforth MV2019) article, and I’ve read both Drummond and Sauer’s comment on MV2019 and Mathur’s review of Drummond and Sauer’s comment on MV2019 (hopefully that wasn’t confusing). On balance, I think Drummond and Sauer’s (henceforth DSComment) comment under review here is a very important contribution to this debate. I tended to find DSComment to be convincing and was comparatively less convinced by Mathur’s review or, indeed, MV2019. I hope my thoughts below are constructive.
It’s worth noting that MV2019 suffered from several primary weaknesses. Namely:
- On one hand, it didn’t really tell us anything we didn’t already know, namely that near-zero effect sizes are common for meta-analyses in violent video game research.
- MV2019, aside from one brief statement as DSComment notes, neglected the well-known methodological issues that tend to spuriously increase effect sizes (unstandardized aggression measures, self-ratings of violent game content, identified QRPs in some studies such as the Singapore dataset, etc.) This resulted in a misuse of meta-analytic procedures.
- MV2019 naïvely interprets (as does Mathur’s review of DSComment) near-zero effect sizes as meaningful, despite numerous reasons not to do so given concerns of false positives.
- MV2019, for an ostensible compilation of meta-analyses, curiously neglects other meta-analyses, such as those by John Sherry or Furuyama-Kanamori & Doi (2016).
At this juncture, publication bias, particularly for experimental studies, has been demonstrated pretty clearly (e.g. Hilgard et al., 2017). I have two comments here. MV2019 offered a novel and not well-tested alternative approach to bias (highlighted again by Mathur's review); however, I did not find the arguments convincing, as this approach appears extrapolative and produces results that simply aren't true. For instance, the argument that 100% of effect sizes in Anderson 2010 are above 0 is quickly falsified merely by looking at the reported effect sizes in the studies included, at least some of which are below .00. Therefore, this would appear to clearly indicate some error in the procedure of MV2019.
Further, we don't need statistics to speculate about publication bias in Anderson et al. (2010) as there are actual specific examples of published null studies missed by Anderson et al. (see Ferguson & Kilburn, 2010). Further, the publication of null studies in the years immediately following (e.g. von Salisch et al., 2011) indicate that Anderson's search for unpublished studies was clearly biased (indeed, I had unpublished data at that time but was not asked by Anderson and colleagues for it). So there's no need at all for speculation given we have actual examples of missed studies and a fair number of them.
It might help to highlight also that traditional publication bias techniques are probably only effective with small-sample experimental studies. For large-sample correlational/longitudinal studies, effect sizes tend to be a bit more homogeneous, hovering closely to zero. In such studies the accumulation of p-values near .05 is unlikely, given the power of such studies. Relatively simple QRPs can make p-values jump rapidly from non-significance to something well below .05. Thus, traditional publication bias procedures may return null results for this pool of studies despite QRPs, and thus publication bias, having taken place.
It might also help to note that meta-analyses with weak effects are very fragile to unreported null studies, which probably exist in greater numbers (particularly for large-n studies) than would be indicated by publication bias techniques.
I agree with Mathur’s comment about experiments not always offering the best evidence, given lack of generalizability to real-world aggression (indeed, that’s been a long-standing concern). However, it might help DSComment to note that, by this point, probably the pool of evidence least likely to find effects are longitudinal studies. I’ve got two preregistered longitudinal analyses of existing datasets myself (here I want to make clear that citing my work is by no means necessary for my positive evaluation of any revisions on DSComment), and there are other fine studies (such as Lobel et al., 2017, Breuer et al., 2015, Kuhn et al., 2018; von Salisch et al., 2011, etc.) The authors may also want to note Przybylski and Weinstein (2019) which offer an excellent example of a preregistered correlational study.
Indeed, in a larger sense, as far as evidence goes, DSComment could highlight recent preregistered evidence from multiple sources (McCarthy et al., 2016; Hilgard et al., 2019, Przybylski & Weinstein, 2019, Ferguson & Wang, 2019, etc.) This would seem to be the most crucial evidence and, aside from one excellent correlational study (Ivory et al.) all of the preregistered results have been null. Even if we think the tiny effect sizes in existing metas provide evidence in support of hypotheses (and we shouldn’t), these preregistered studies suggest we shouldn’t trust even those tiny effects to be “true.”
The weakest aspect of MV2019 was the decision to interpret near-zero effects as meaningful. Mathur argues that tiny effects can be important once spread over a population. However, this is merely speculation, and there's no data to support it. It's kind of a truthy thing scholars tend to say defensively when confronted by the possibility that effect sizes don't support their hypotheses. By making this argument, Mathur invites an examination of population data, where convincing evidence (Markey, Markey & French, 2015; Cunningham et al., 2016; Beerthuizen, Weijters & van der Laan, 2017) shows that violent game consumption is associated with reduced violence in society. Granted, some may express caution about looking at societal-level data, but here is where scholars can't have it both ways: one can't make claims about societal-level effects, and then not want to look at the societal data. Such arguments make unfalsifiable claims and are unscientific in nature.
The other issue is that this line of argument makes effect sizes irrelevant. If we’re going to interpret effect sizes no matter how near to zero as hypothesis supportive, so long as they are “statistically significant” (which, given the power of meta-analyses, they almost always are), then we needn’t bother reporting effect sizes at all. We’re still basically slaves to NHST, just using effect sizes as a kind of fig leaf for the naked bias of how we interpret weak results.
Also, that’s just not how effect sizes work. They can’t be sprinkled like pixie dust over a population to make them meaningful.
As DSComment points out, effect sizes this small have high potential for Type 1 error. Funder and Ozer (2019) recently contributed to this discussion in a way I think was less than helpful (to be very clear, I respect Funder and Ozer greatly, but disagree with many of their comments on this specific issue). Yet, as they note, interpretation of tiny effects is based on such effects being "reliable", a condition clearly not in evidence for violent game research given the now extensive literature on the systematic methodological flaws in that literature.
In her comment Dr. Mathur dismisses the comparison with ESP research, but I disagree with (or dismiss?) this dismissal. The fact that effect sizes in meta-analyses for violent game research are identical to those for “magic” is exactly why we should be wary of interpreting such effect sizes as hypothesis supportive. Saying violent game effects are more plausible is irrelevant (and presumably the ESP people would disagree). However, the authors of DSComment might strengthen their argument by noting that some articles have begun examining nonsense outcomes within datasets. For example, in Ferguson and Wang (2019) we show that the (weak and in that case non-significant) effects for violent game playing are no different in predicting aggression than nonsense variables (indeed, the strongest effect was for the age at which one had moved to a new city). Orben and Przybylski (2019) do something similar and very effective with screen time. Point being, we have an expanding literature to suggest that the interpretation of such weak effects is likely to lead us to numerous false positive errors.
The authors of DSComment might also note that MV2019 commit a fundamental error of meta-analysis, namely assuming that the “average effect size wins!” When effect sizes are heterogeneous (as Mathur appears to acknowledge unless I misunderstood) the pooled average effect size is not a meaningful estimator of the population effect size. That’s particularly true given GIGO (garbage in, garbage out). Where QRPs have been clearly demonstrated for some studies in this realm (see Przybylski & Weinstein, 2019 for some specific examples of documentation involving the Singapore dataset), the pooled average effect size, however it is calculated, is almost certainly a spuriously high estimate of true effects.
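The point about pooled averages under heterogeneity can be sketched numerically. This is my own illustration, not an analysis from any of the papers discussed; `mu` and `tau` are made-up values, not estimates from any of these meta-analyses. When study-level true effects are normally distributed, a small pooled mean can coexist with sizeable shares of true effects both well above it and below zero, which is exactly why the pooled average alone is a poor summary.

```python
from math import erf, sqrt

def share_above(mu, tau, q):
    # P(true effect > q) when study-level true effects ~ N(mu, tau^2)
    # (standard normal tail probability via the error function)
    return 0.5 * (1.0 - erf((q - mu) / (tau * sqrt(2.0))))

# Hypothetical numbers: small pooled mean, substantial heterogeneity
mu, tau = 0.10, 0.20

print(round(share_above(mu, tau, 0.0), 2))       # share of positive true effects
print(round(share_above(mu, tau, 0.2), 2))       # share exceeding d = 0.2
print(round(1 - share_above(mu, tau, 0.0), 2))   # share that is negative
```

With these made-up numbers, roughly a third of the true effects are negative and a third exceed 0.2, even though the pooled mean is only 0.10; that is the sense in which an "average effect size wins" summary misleads when effects are heterogeneous.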
DSComment could note that other issues such as citation bias are known to be associated with spuriously high effect sizes (Ferguson, 2015), another indication that researcher behaviors are likely pulling effect sizes above the actual population effect size.
Overall, I don't think the authors of MV2019 were very familiar with this field, appearing unaware of the serious methodological errors endemic in much of the literature which pull effect sizes spuriously high. In the end, they really didn't say anything we didn't already know (the effect sizes across metas tend to be near zero), and their interpretation of these near-zero effect sizes was incorrect.
With that in mind, I do think DSComment is an important part of this debate and is well worth publishing. I hope my comments here are constructive.
Signed, Chris Ferguson
-
[This was a peer review for the journal "Meta-Psychology", and I am posting it via hypothes.is at the journal's suggestion.]
I thank the authors for their response to our article. For full disclosure, I previously reviewed an earlier version of this manuscript. The present version of the manuscript shows improvement, but does not yet address several of my substantial concerns, each of which I believe should be thoroughly addressed if a revision is invited. My concerns are as follows:
1.) The publication bias corrections still rely on incorrect statistical reasoning, and using more appropriate methods yields quite different conclusions.
Regarding publication bias, the first analysis of the number of expected versus observed p-values between 0.01 and 0.05 that is presented on page 3 (i.e., “Thirty nine…should be approximately 4%”) cannot be interpreted as a test of publication bias, as described in my previous review. The p-values would only be uniformly distributed if the null were true for every study in the meta-analysis. If the null does not hold for every study in the meta-analysis, then we would of course expect more than 4% of the p-values to fall in [0.01, 0.05], even in the absence of any publication bias. I appreciate that the authors have attempted to address this by additionally assessing the excess of marginal p-values under two non-null distributions. However, these analyses are still not statistically valid in this context; they assume that every study in the meta-analysis has exactly the same effect size (i.e., that there is no heterogeneity), which is clearly not the case in the present meta-analyses. Effect heterogeneity can substantially affect the distribution and skewness of p-values in a meta-analysis (see Johnson & Yuan, 2007). To clarify the second footnote on page 3, I did not suggest this particular analysis in my previous review, but rather described why the analysis assuming uniformly distributed p-values does not serve as a test of publication bias.
I would instead suggest conducting publication bias corrections using methods that accommodate heterogeneity and allow for a realistic distribution of effects across studies. We did so in the Supplement of our PPS piece (https://journals.sagepub.com/doi/suppl/10.1177/1745691619850104) using a maximum-likelihood selection model that accommodates normally-distributed, heterogeneous true effects and essentially models a discontinuous “jump” in the probability of publication at the alpha threshold of 0.05. These analyses did somewhat attenuate the meta-analyses’ pooled point estimates, but suggested similar conclusions to those presented in our main text. For example, the Anderson (2010) meta-analysis had a corrected point estimate among all studies of 0.14 [95% CI: 0.11, 0.16]. The discrepancy between our findings and Drummond & Sauer’s arises partly because the latter analysis focuses only on pooled point estimates arising from bias correction, not on the heterogeneous effect distribution, which is the very approach that we described as having led to the apparent “conflict” between the meta-analyses in the first place. Indeed, as we described in the Supplement, publication bias correction for the Anderson meta-analyses still yields an estimated 100%, 76%, and 10% of effect sizes above 0, 0.10, and 0.20 respectively. Again, this is because there is substantial heterogeneity. If a revision is invited, I would (still) want the present authors to carefully consider the issue of heterogeneity and its impact on scientific conclusions.
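The distributional point in (1) can be checked with a small simulation. This is my own sketch, not an analysis from either paper: under the global null, study-level two-sided p-values are uniform, so about 4% land in [0.01, 0.05]; heterogeneous nonzero true effects push far more p-values into that window even with no publication bias at all.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
n_studies, n_per_arm = 20000, 50

def study_pvals(true_d):
    # Two-sample z-statistic per study: approximately N(d * sqrt(n/2), 1)
    # for true standardized effect d, converted to a two-sided p-value.
    z = rng.normal(true_d * sqrt(n_per_arm / 2.0), 1.0)
    return np.array([erfc(abs(zi) / sqrt(2.0)) for zi in z])

def in_window(p):
    # Share of p-values in the "marginal" band [0.01, 0.05]
    return float(np.mean((p >= 0.01) & (p <= 0.05)))

# Global null: every study's true effect is zero -> p-values uniform
p_null = study_pvals(np.zeros(n_studies))

# Heterogeneous true effects, no selective publication at all
p_het = study_pvals(rng.normal(0.2, 0.15, n_studies))

print(in_window(p_null))  # close to 0.04
print(in_window(p_het))   # well above 0.04
```

The simulated effect distribution (mean 0.2, SD 0.15) is an arbitrary choice for illustration; the qualitative conclusion, an excess of marginal p-values without any publication bias, holds for any heterogeneous non-null distribution.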
2.) Experimental studies do not always yield higher-quality evidence than observational studies.
Additionally, the authors focus only on the subset of experimental studies in Hilgard’s analysis. Although I agree that “experimental studies are the best way to completely eliminate uncontrolled confounds”, it is not at all clear that experimental lab studies provide the overall strongest evidence regarding violent video games and aggression. Typical randomized studies in the video game literature consist, for example, of exposing subjects to violent video games for 30 minutes, then immediately having them complete a lab outcome measure operationalizing aggression as the amount of hot sauce a subject chooses to place on another subject’s food. It is unclear to what extent one-time exposures to video games and lab measures of “aggression” have predictive validity for real-world effects of naturalistic exposure to video games. In contrast, a well-conducted case-control study with appropriate confounding control and assessing violent video game exposure in subjects with demonstrated violent behavior versus those without might in fact provide stronger evidence for societally relevant causal effects (e.g., Rothman et al., 2008).
3.) Effect sizes are inherently contextual.
Regarding the interpretation of small effect sizes, we did indeed state several times in our paper that the effect sizes are “almost always quite small”. However, to universally dismiss effect sizes of less than d = 0.10 as less than “the smallest effect size of practical importance” is too hasty. Exposures, such as violent video games, that have very broad outreach can have substantial effects at the population level when aggregated across many individuals (VanderWeele et al., 2019). The authors are correct that small effect sizes are in general less robust to potential methodological biases than larger effect sizes, but to reiterate the actual claim we made in our manuscript: “Our claim is not that our re-analyses resolve these methodological problems but rather that widespread perceptions of conflict among the results of these meta-analyses—even when taken at face value without reconciling their substantial methodological differences—may in part be an artifact of statistical reporting practices in meta-analyses.” Additionally, the comparison to effect sizes for psychic phenomena does not strike me as particularly damning for the violent video game literature. The prior plausibility that psychic phenomena exist is extremely low, as the authors themselves describe, and it is surely much lower than the prior plausibility that video games might increase aggressive behavior. Extraordinary claims require extraordinary evidence, so any given effect size for psychic phenomena is much less credible than for video games.
Signed, Maya B. Mathur, Department of Epidemiology, Harvard University
References
Johnson, Valen, and Ying Yuan. "Comments on ‘An exploratory test for an excess of significant findings’ by J.P.A. Ioannidis and T.A. Trikalinos." Clinical Trials 4.3 (2007): 254.
Rothman, K. J., Greenland, S., & Lash, T. L. (2008). Modern epidemiology (Vol. 3). Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins.
VanderWeele, T. J., Mathur, M. B., & Chen, Y. (2019). Media portrayals and public health implications for suicide and other behaviors. JAMA Psychiatry.
-
- Aug 2019
-
www.plantcell.org www.plantcell.org
-
See the peer review report here http://www.plantcell.org/content/plantcell/suppl/2019/08/25/tpc.19.00255.DC3/tpc19.00255.PeerReview.pdf
-
-
www.plantcell.org www.plantcell.org
-
Peer review report can be found here http://www.plantcell.org/content/plantcell/suppl/2019/07/16/tpc.19.00003.DC2/tpc19.00003.PeerReview.pdf
-
-
www.plantcell.org www.plantcell.org
-
Peer review report can be found here http://www.plantcell.org/content/plantcell/suppl/2019/07/17/tpc.18.00662.DC2/tpc18.00662.PeerReview.pdf
-
-
www.plantcell.org www.plantcell.org
-
Peer review report can be found here http://www.plantcell.org/content/plantcell/suppl/2019/07/03/tpc.19.00081.DC2/tpc19.00081.PeerReviewReport.pdf
-
-
www.plantcell.org www.plantcell.org
-
Peer review report can be found here http://www.plantcell.org/content/plantcell/suppl/2019/07/03/tpc.19.00089.DC2/tpc19.00089.PeerReview.pdf
-
-
www.plantcell.org www.plantcell.org
-
Peer review report can be found here http://www.plantcell.org/content/plantcell/suppl/2019/08/07/tpc.18.00974.DC2/tpc18.00974.PeerReview.pdf
-
-
www.plantcell.org www.plantcell.org
-
Peer Review Report can be found here http://www.plantcell.org/content/plantcell/suppl/2019/08/14/tpc.19.00314.DC1/tpc19.00314.PeerReview.pdf
-
- Jul 2019
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/07/12/tpc.18.00785.DC2/tpc18.00785.PeerReviewReport.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/04/09/tpc.18.00778.DC2/tpc18.00778.PeerReview.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/04/12/tpc.18.00946.DC2/tpc18.00946.PeerReview.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/04/17/tpc.18.00938.DC2/tpc18.00938.PeerReview.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/07/13/tpc.18.00606.DC2/tpc18.00606.PRR-Griffiths.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/07/02/tpc.19.00132.DC2/tpc19.00132.PeerReviewReport.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/07/13/tpc.19.00033.DC2/tpc19.00033.PRR-Niittyla.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/05/18/tpc.18.00043.DC2/tpc18.00043.PeerReviewReport.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/05/07/tpc.19.00047.DC2/tpc19.00047.PeerReview.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/05/29/tpc.19.00235.DC2/tpc19.00235.PeerReviewReport.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/07/13/tpc.18.00918.DC2/tpc18.00918.PRR-Yin.pdf
-
-
www.plantcell.org www.plantcell.org
-
Three-dimensional Time-lapse Analysis Reveals Multiscale Relationships in Maize Root Systems with Contrasting Architectures
Check out the peer review report for this article: http://www.plantcell.org/content/plantcell/suppl/2019/05/30/tpc.19.00015.DC2/tpc19.00015.PeerReview.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report here http://www.plantcell.org/content/plantcell/suppl/2019/07/13/tpc.19.00069.DC2/tpc19.00069.PRR-Genschik.pdf
-
-
www.plantcell.org www.plantcell.org
-
A Series of Fortunate Events: Introducing Chlamydomonas as a Reference Organism
Check out the peer review report here http://www.plantcell.org/content/plantcell/suppl/2019/06/12/tpc.18.00952.DC2/tpc18.00952.PeerReview.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report here http://www.plantcell.org/content/plantcell/suppl/2019/07/02/tpc.18.00840.DC2/tpc18.00840.PeerReviewReport.pdf
-
-
www.plantcell.org www.plantcell.org
-
Check out the peer review report here http://www.plantcell.org/content/plantcell/suppl/2019/07/02/tpc.18.00706.DC2/tpc18.00706.PeerReviewReport.pdf
-
-
www.plantcell.org www.plantcell.org
-
Chloroplast Outer Membrane β-Barrel Proteins Use Components of the General Import Apparatus
Check out the peer review report here https://doi.org/10.1105/tpc.19.00001
-
-
www.plantcell.org www.plantcell.org
-
Separating Golgi proteins from cis to trans reveals underlying properties of cisternal localization
Check out the peer review report here http://www.plantcell.org/content/plantcell/suppl/2019/07/03/tpc.19.00081.DC2/tpc19.00081.PeerReviewReport.pdf
-
- Apr 2019
- Feb 2019
-
www.plantcell.org www.plantcell.org
-
Interactions of tomato and Botrytis genetic diversity: Parsing the contributions of host differentiation, domestication and pathogen variation
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
The systems architecture of molecular memory in poplar after abiotic stress
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
Oscillating aquaporin phosphorylations and 14-3-3 proteins mediate circadian regulation of leaf hydraulics
This article has a Peer Review Report
-
- Jan 2019
-
scholarlykitchen.sspnet.org scholarlykitchen.sspnet.org
-
Web annotation, for example, is catching on as a new mode of collaboration, peer review, and other research functions.
-
-
academic.oup.com academic.oup.com
-
Ploidy and Size at Multiple Scales in the Arabidopsis Sepal
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
HAF1 Modulates Circadian Accumulation of OsELF3 Controlling Heading Date Under Long-day Conditions in Rice
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
The ZmbZIP22 Transcription Factor Regulates 27-kD γ-Zein Gene Transcription during Maize Endosperm Development
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
Systemic Upregulation of MTP2- and HMA2-Mediated Zn Partitioning to the Shoot Supplements Local Zn Deficiency Responses
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
A MPK3/6-WRKY33-ALD1-Pipecolic Acid Regulatory Loop Contributes to Systemic Acquired Resistance
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
The Inhibitor Endosidin 4 Targets SEC7 Domain-Type ARF GTPase Exchange Factors and Interferes with Subcellular Trafficking in Eukaryotes
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
Nonselective Chemical Inhibition of Sec7 Domain-Containing ARF GTPase Exchange Factors
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
Opaque-2 Regulates a Complex Gene Network Associated with Cell Differentiation and Storage Functions of Maize Endosperm
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
PAPST2 plays a critical role for PAP removal from the cytosol and subsequent degradation in plastids and mitochondria
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
The Number of Meiotic Double-Strand Breaks Influences Crossover Distribution in Arabidopsis
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
SlMYC1 Regulates Type VI Glandular Trichome Formation and Terpene Biosynthesis in Tomato Glandular Cells
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
Inferring Roles in Defense from Metabolic Allocation of Rice Diterpenoids
This paper has a Peer Review Report
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
The OsRR24/LEPTO1 Type-B Response Regulator is Essential for the Organization of Leptotene Chromosomes in Rice Meiosis
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
A Robust Auxin Response Network Controls Embryo and Suspensor Development through a bHLH Transcriptional Module
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
The Role of Abscisic Acid Signaling in Maintaining the Metabolic Balance Required for Arabidopsis Growth under Non-stress Conditions
This article has a Peer Review Report
-
-
www.plantcell.org www.plantcell.org
-
HOMEOBOX PROTEIN52 Mediates the Crosstalk between Ethylene and Auxin Signaling during Primary Root Elongation by Modulating Auxin Transport-Related Gene Expression
This article has a Peer Review Report
-
-
theconversation.com
-
Dian found many weaknesses when reviewing the incoming research proposals. Many of the proposed ideas were unoriginal and out of date. Some were merely duplicates or recycled versions of earlier research.
Are these review results open to the public and also shared with the researchers? I apologize if I am mistaken, but I submit a proposal to Kemristekdikti every year and have never received the full review results.
-
-
www.plantcell.org
-
The Receptor-like Pseudokinase GHR1 Is Required for Stomatal Closure
Please find a Peer Review Report here.
The report shows the major requests for revision and author responses. Minor comments for revision and miscellaneous correspondence are not included. The original format may not be reflected in this compilation, but the reviewer comments and author responses are not edited, except to correct minor typographical or spelling errors that could be a source of ambiguity.
-
- Oct 2018
-
fossilsandshit.com
-
open peer review model
-
- May 2018
-
-
“OER are not typically counted toward research requirements, because they are seen as lacking the vetting process that comes with, for example, peer-reviewed articles.”
-
- Mar 2018
-
www.sciencemag.org
-
In what appears to be a first, a U.S. court is forcing a journal publisher to breach its confidentiality policy and identify an article's anonymous peer reviewers.
Wow. This could have a chilling effect on reviews for certain subjects.
-
-
pdxscholar.library.pdx.edu
-
Keeping Up with...Open Peer Review
-
-
thatpsychprof.com
-
By asking my students to craft and peer-review multiple-choice questions based on the concepts covered that week (and scaffolding this process over the semester)
This paragraph shows the "peer review" ingredient from the 8 ingredients of open pedagogy, by having students collaborate in the assessment process.
-
- Feb 2018
-
mvolmar.gsucreate.org
-
Behind all the things on the panel is a pinkish/peach layer.
I would avoid using the word "things."
-
Lettering It spells out the word Mitchell David Mucha M.D., in white with a pinkish/peach boarder. The stitching on the words is very rough. The letters are very huge and take up a majority of the space on the panel.
I would try adding transition sentences to move more smoothly between ideas.
-
idea of the Stethoscope was from René Théophile Hyacinthe Laënnec
Good job including links and images on the page to give the reader a fuller experience.
-
This long part of it is the same color as the bag except it has more of rough touch to it.
There are a couple of grammar errors. Here, there should be an "a" before "rough": "...more of a rough touch to it."
-
About The Panel
Before describing the panel, I would give the audience some background information on The Quilt, such as its founding or what each panel represents.
-
The orginal Doctor’s Bag was the Gladstone which was made in the mid nineteen century by J.G. Beard. It was used for house visits to patients house. The contents inside of the bag is medical tools like stethoscope, clinical thermometer and tongue depressor some form of illumination, such as a torch, plessor, ophthalmoscope and auriscope; a test tube or two; and bottles of Benedict’s reagent and acetic acid to complete the kit (RACGP).
Good job giving some background information on the Doctor's Bag.
-
The items that I will be describing from the panel is a doctor bag, stethoscope, the colors, and finally the lettering. I will be describing them in the same order I have mentioned them.
I would avoid writing in the first person (that is, avoid using "I").
-
-
rionagtaylor.gsucreate.org
-
The top center panel, belonging to Eddie (no last name reported), has a mosaic background of 6″x 6″ burgundy, soft dusty rose, light bubblegum pink, and sapphire blue squares. His name is then sewn in large, cursive lettering across the top left half of the panel.
This description makes me feel as if I'm looking at the panel. Love it!
-
Since the panels would be featured in the Quilt as a visual memorial and not as a blanket, I wondered why the panels that were predominantly paintings were not made of canvas fabric instead of fabrics associated with apparel, or at least primed with some kind of Gesso to preserve the piece. I am by no means an expert, but as an artist who has experimented with different mediums on both primed and un-primed fabrics, I can attest for the value of using the right mediums on their respective materials. Though I am sure acceptable fabric paints were mostly used, I could tell where they were not.
Great connections between your experience as an artist and what you've observed from the quilt. I enjoy the objectivity rather than simply taking the panel for what it is.
-
Although tied by a similar tragedy, each panel exhumes individuality through applying different artistic methods.
The individuality of the panels is nicely described and understood. From the previous description, there are distinct differences yet similarities in the pieces.
-
-
cameronchasebrown.gsucreate.org
-
the rainbow in the body of the image is shown vibrantly in the majority of picture which by its colorful characteristics
Was there any writing? Any markings? Differences in stitches? Add more details, even mentioning the lack of detail (e.g., stating that the panel has no visible markings) to give the reader a fuller image.
-
Another important visual element that I detected within this image was the blue bracelet circling the shiny arm holding the dog.
Were there any additional objects (letters, pictures, notes, etc.) that came with the panel? Maybe there will be some clues as to what the dog or bracelet means.
-
This gives the rainbow an ever greater meaning, maybe Jimmy had an aspiration for music and pursued a career in one, or possibly just has a respect for the fine musical arts.
From reading this I can tell you are really starting to question and investigate this panel, as well as Jimmy Popejoy.
-
Things within the panel that I took notice of immediately was the dog that was held by the shiny arm.
I like that you started with the thing that grabbed your attention most. However, I think you should first introduce The AIDS Quilt so the reader will understand what it is.
-
The panel base is mostly a pure, bloody, vibrant red color. This is also in the material of somewhat a shiny, soft, velvet material that vividly gives the panel some extravagant flare.
Love the word choice; the description brings a vivid image to mind and flows smoothly.
-