55 Matching Annotations
  1. Jun 2023
    1. Todd Henry, in his book The Accidental Creative: How to Be Brilliant at a Moment's Notice (Portfolio/Penguin, 2011), uses the acronym FRESH for the elements of "creative rhythm": Focus, Relationships, Energy, Stimuli, Hours. His advice about note-taking comes in a small section of the chapter on Stimuli. He recommends using notebooks with indexes, including a Stimuli index. He says, "Whenever you come across stimuli that you think would make good candidates for your Stimulus Queue, record them in the index in the front of your notebook." And "Without regular review, the practice of note taking is fairly useless." And "Over time you will begin to see patterns in your thoughts and preferences, and will likely gain at least a few ideas each week that otherwise would have been overlooked." Since Todd describes essentially the same effect as @Will but without mentioning a ZK, this "magic" or "power" seems to be a general feature of reviewing ideas or stimuli for creative ideation, not specific to a ZK. (@Will acknowledged this when he said, "Using the ZK method is one way of formalizing the continued review of ideas", not the only way.)

      via Andy

      Andy indicates that this review functionality isn't specific to the zettelkasten, but it still sits within the framework of note-taking. Given this, are there really "other" ways available?

  2. Oct 2022
  3. Apr 2022
    1. doi: https://doi.org/10.1038/d41586-021-02346-4

      https://www.nature.com/articles/d41586-021-02346-4

      Oddly, this article doesn't cover academia.edu, but it does include ResearchGate, which has a content-sharing partnership with the publisher Springer Nature.

      Matthews, D. (2021). Drowning in the literature? These smart software tools can help. Nature, 597(7874), 141–142. https://doi.org/10.1038/d41586-021-02346-4

    2. Open Knowledge Maps, meanwhile, is built on top of the open-source Bielefeld Academic Search Engine, which boasts more than 270 million documents, including preprints, and is curated to remove spam.

      Open Knowledge Maps uses the open-source Bielefeld Academic Search Engine; as of 2021, it covered more than 270 million documents, including preprints. Open Knowledge Maps also curates its index to remove spam.

      How much spam is included in the journal-article space? I've heard of incredibly low-quality and poorly edited journals, so filtering those out may be fairly easy to do, but is there also spam at the level of individual articles below that?

    3. Another visual-mapping tool is Open Knowledge Maps, a service offered by a Vienna-based not-for-profit organization of the same name. It was founded in 2015 by Peter Kraker, a former scholarly-communication researcher at Graz University of Technology in Austria.

      https://openknowledgemaps.org/

      Open Knowledge Maps is a visual literature-search tool based on keywords rather than on a paper's title, author, or DOI. The service was founded in 2015 by Peter Kraker, a former scholarly-communication researcher at Graz University of Technology.

  4. Dec 2021
    1. AIMOS. (2021, November 30). How can we connect #metascience to established #science fields? Find out at this afternoon’s session at #aimos2021 Remco Heesen @fallonmody Felipe Romeo will discuss. Come join us. #OpenScience #OpenData #reproducibility https://t.co/dEW2MkGNpx [Tweet]. @aimos_inc. https://twitter.com/aimos_inc/status/1465485732206850054

  5. Jul 2021
  6. Jun 2021
  7. May 2021
  8. Apr 2021
    1. The greatest degree of openness in this variant occurs when there is transparency about the authors as well as about the reviewers and the reviews themselves. Open review procedures further include the option of subsequently publishing the reviews as accompanying texts to a publication.

      In my view, full transparency would only be achieved if rejected submissions were also posted online, together with the reviews that led to their rejection. It seems to me that, in order to prevent opinion or citation cartels (or at least to make them obvious), this would be even more important than naming the reviewers.

  9. Mar 2021
    1. here is my set of best practices.

      I review libraries before adding them to my project. This involves skimming the code or reading it in its entirety if short, skimming the list of its dependencies, and making some quality judgements on liveliness, reliability, and maintainability in case I need to fix things myself. Note that length isn't a factor on its own, but may figure into some of these other estimates. I have on occasion pasted short modules directly into my code because I didn't think their recursive dependencies were justified.

      I then pin the library version and all of its dependencies with npm-shrinkwrap.

      Periodically, or when I need specific changes, I use npm-check to review updates. Here, I actually do look at all the changes since my pinned version, through a combination of change and commit logs. I make the call on whether the fixes and improvements outweigh the risk of updating; usually the changes are trivial and the answer is yes, so I update, shrinkwrap, skim the diff, done.

      I prefer not to pull in dependencies at deploy time, since I don't need the headache of GitHub or npm being down when I need to deploy, and production machines may not have external internet access, let alone toolchains for compiling binary modules. npm-pack followed by npm-install of the tarball is your friend here, and gets you pretty close to 100% reproducible deploys and rollbacks.

      This list intentionally has lots of judgement calls and few absolute rules. I don't follow all of them for all of my projects, but it is what I would consider a reasonable process for things that matter.
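
      A minimal sketch of that workflow as shell commands (the package, version, and tarball names here are hypothetical; npm-check is a third-party tool that must be installed separately):

        # Review the library first, then pin it and its full dependency tree
        npm install some-lib@1.2.3        # hypothetical package and version
        npm shrinkwrap                    # writes npm-shrinkwrap.json, locking transitive deps

        # Periodically review upstream changes before deciding to update
        npx npm-check                     # lists outdated packages for manual review

        # Build a self-contained artifact so deploys do not depend on the registry
        npm pack                          # produces my-app-1.0.0.tgz from the current project
        npm install ./my-app-1.0.0.tgz    # install from the tarball on the production machine

      After an update, re-running npm shrinkwrap and skimming the diff of npm-shrinkwrap.json corresponds to the "update, shrinkwrap, skim the diff" step described above.
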
  10. Feb 2021
  11. Oct 2020
  12. Sep 2020
  13. Aug 2020
  14. Jun 2020
  15. May 2020
  16. Apr 2020
  17. Feb 2020
  18. Dec 2019
    1. Supplementary data

      Of special interest is that a reviewer openly discussed on his blog his general thoughts about the state of the art in the field, based on what he had been looking at in the paper. This blog post came out just after he completed his first-round review, and before an editorial decision was made.

      http://ivory.idyll.org/blog/thoughts-on-assemblathon-2.html

      This spawned additional blog posts that broadened the discussion among the community, again looking toward the future. See: https://www.homolog.us/blogs/genome/2013/02/23/titus-browns-thoughts-on-the-assemblathon-2-paper/

      And

      https://flxlexblog.wordpress.com/2013/02/26/on-assembly-uncertainty-inspired-by-the-assemblathon2-debate/

      Further, the authors, now in the process of revising their manuscript, joined in on Twitter, reaching out to the community at large for suggestions on revisions and additional thoughts. Their paper had been posted on arXiv, allowing for this type of commenting and author/reader interaction. See: https://arxiv.org/abs/1301.5406

      The Assemblathon.org site collected and presented all the information on the discussion surrounding this article. https://assemblathon.org/page/2

      A blog post by the editors followed all of this, describing this ultra-open peer review and highlighting how these discussions during the peer-review process ended up being a very forward-looking conversation about the state of the field, based on what the reviewers were seeing in this paper, and about the directions the community should now focus on. This broader open discussion, and its very positive nature, could only happen in an open, transparent review process. See: https://blogs.biomedcentral.com/bmcblog/2013/07/23/ultra-open-peer-review/

  19. Oct 2019
    1. A Million Brains in the Cloud

      Arno Klein and Satrajit S. Ghosh published this research idea in 2016 and opened it to review. In fact, you could review their abstract directly in RIO, but for the MOOC activity "open peer review" we want you to read and annotate their proposal using this Hypothes.is layer. You can add an annotation by simply highlighting a section that you want to comment on, or you can add a page note saying in a few sentences what you think of their ideas. You can also reply to comments that your peers have already made. Please sign up to Hypothes.is and join the conversation!

  20. Oct 2018
  21. Mar 2018
  22. Jun 2017
    1. protected platform whereby many expert reviewers could read and comment on submissions, as well as on fellow reviewers’ comments

      Conduct pre-peer review during manuscript development on a web platform. That is what is happening on Therapoid.net.

    2. intelligent crowd reviewing

      Crowdsourced review? Pre-peer review as a precursor to a preprint server.

  23. Mar 2017
    1. Eve Marder, a neurobiologist at Brandeis University and a deputy editor at eLife, says that around one third of reviewers under her purview sign their reviews.

      Perhaps these could routinely become page notes?

    2. If Kriegeskorte is invited by a journal to write a review, first he decides whether he's interested enough to review the paper. If so, he checks whether there's a preprint available—basically a final draft of the manuscript posted publicly online on one of several preprint servers like arXiv and bioRxiv. This is crucial: writing about a manuscript that he's received in confidence from a journal editor would break confidentiality by talking about a paper before the authors are ready. If there's a preprint, great. He reviews the paper, posts to his blog, and also sends the review to the journal editor.

      Interesting workflow and within his rights.

    3. The tweet linked to the blog of Niko Kriegeskorte, a cognitive neuroscientist at the Medical Research Council in the UK who, since December 2015, has performed all of his peer review openly.

      Interesting...

  24. Jan 2017
  25. Oct 2016
  26. Feb 2016
    1. As I have mentioned in previous posts, several platforms have appeared recently that could take on this role of third-party reviewer. I could imagine at least: libreapp.org, peerevaluation.org, pubpeer.com, and publons.com. Pandelis Perakakis mentioned several others as well: http://thomas.arildsen.org/2013/08/01/open-review-of-scientific-literature/comment-page-1/#comment-9.
  27. Jan 2016
    1. Below I list a few advantages and drawbacks of anonymity, where I assume that a drawback of anonymous review is an advantage of identified review and vice versa.

      Drawbacks:

      - Reviewers do not get credit for their work. They cannot, for example, reference particular reviews in their CVs as they can with publications.
      - It is relatively "easy" for a reviewer to provide unnecessarily blunt or harsh critique.
      - It is difficult to guess if the reviewer has any conflict of interest with the authors by being, for example, a competing researcher interested in stalling the paper's publication.

      Advantages:

      - Reviewers do not have to fear "payback" for an unfavourable review that is perceived as unfair by the authors of the work.
      - Some reviewers (perhaps especially "high-profile" senior faculty members) might find it difficult to find the time to provide as thorough a review as they would ideally like to, yet would still like to contribute and can perhaps provide valuable, experienced insight. They can do so without putting their reputation on the line.
    2. With most journals, if I submit a paper that is rejected, that information is private and I can re-submit elsewhere. In open review, with a negative review one can publicly lose face as well as lose the possibility of re-submitting the paper. Won't this be a significant disincentive to submit?

      This is precisely what we are trying to change. Currently, scientists can submit a paper numerous times, receive numerous negative reviews, and ultimately publish their paper somewhere else after having "passed" peer review. If scientists prefer this system, then science is in a dangerous place. By choosing this model, we as scientists are basically saying we prefer nice neat stories that no one will criticize. This is silly, though, because science, more often than not, is not neat and perfect. The Winnower believes that transparency in publishing is of the utmost importance. Going from a closed anonymous system to an open system will be hard for many scientists, but I believe that it is the right thing to do if we care about the truth.

    3. PLOS Labs is working on establishing structured reviews, and we have talked with them about this.

    4. It should be noted that papers will always be open for review so that a paper can accumulate reviews throughout its lifetime.
  28. Dec 2015
    1. We believe that openness and transparency are core values of science. For a long time, technological obstacles existed preventing transparency from being the norm. With the advent of the internet, however, these obstacles have largely disappeared. The promise of open research can finally be realized, but this will require a cultural change in science. The power to create that change lies in the peer-review process.

      We suggest that beginning January 1, 2017, reviewers make open practices a pre-condition for more comprehensive review. This is already in reviewers’ power; to drive the change, all that is needed is for reviewers to collectively agree that the time for change has come.

  29. May 2015
    1. Author and peer-reviewer anonymity haven't been shown to have an overall benefit, and they may cause harm. Part of the potential for harm arises if journals act as though anonymity is a sufficiently effective mechanism for preventing bias.
    2. Peer reviewers were more likely to substantiate the points they made (9, 14, 16, 17) when they knew they would be named. They were especially likely to provide extra substantiation if they were recommending an article be rejected, and they knew their report would be published if the article was accepted anyway (9, 15).