  1. Jul 2015
    1. Put simply, various stakeholders seem to have different perspectives on how research assessment works currently and how it should work in the future. In order to move forward, we must first identify and then address a number of misunderstandings.

      This might be of interest: a project we've put together for the Scholarly Communication Institute this year, titled "The Qualities of Quality – Validating and justifying digital scholarship beyond traditional values frameworks": http://trianglesci.org/2015/05/15/the-qualities-of-quality/

    1. Whether or not you take a constructivist view of education, feedback on performance is inevitably seen as a crucial component of the process. However, experience shows that students (and academic staff) often struggle with feedback, which all too often fails to translate into feed-forward actions leading to educational gains. Problems get worse as student cohort sizes increase. By building on the well-established principle of separating marks from feedback and by using a social network approach to amplify peer discussion of assessed tasks, this paper describes an efficient system for interactive student feedback. Although the majority of students remain passive recipients in this system, they are still exposed to deeper reflection on assessed tasks than in traditional one-to-one feedback processes.
  2. May 2015
    1. Broockman has ideas about how to reform things. He thinks so-called “post-publication peer review” — a system which, to oversimplify a bit, makes it easier to evaluate the strength of previously published findings — has promise.
    1. Author and peer reviewer anonymity haven’t been shown to have an overall benefit, and they may cause harm. Part of the potential for harm is if journals act as though it’s a sufficiently effective mechanism to prevent bias.
    2. Peer reviewers were more likely to substantiate the points they made (9, 14, 16, 17) when they knew they would be named. They were especially likely to provide extra substantiation if they were recommending an article be rejected, and they knew their report would be published if the article was accepted anyway (9, 15).
  3. Feb 2014
    1. Alternatively, Daphne Koller and Andrew Ng who are the founders of Coursera, a Stanford MOOC startup, have decided to use peer evaluation to assess writing. Koller and Ng (2012) specifically used the term “calibrated peer review” to refer to a method of peer review distinct from an application developed by UCLA with National Science Foundation funding called Calibrated Peer Review™ (CPR). For Koller and Ng, “calibrated peer review” is a specific form of peer review in which students are trained on a particular scoring rubric for an assignment using practice essays before they begin the peer review process.
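
      As a purely illustrative sketch of the calibration step described here (nothing below comes from Koller and Ng or from CPR™; the scores, threshold, and names are invented), a student might be admitted to peer review only after scoring practice essays close to an instructor's reference scores:

      ```python
      # Hypothetical sketch of a calibration gate: the student scores practice
      # essays with known reference scores and is admitted to peer review only
      # if their scores track the reference closely enough. The scores,
      # threshold, and names below are invented for illustration.

      REFERENCE_SCORES = [4, 2, 5]   # rubric scores an instructor gave three practice essays
      MAX_MEAN_ERROR = 0.5           # allowed average deviation per essay

      def is_calibrated(student_scores, reference=REFERENCE_SCORES,
                        tolerance=MAX_MEAN_ERROR):
          """Return True if the student's practice scores are close to the reference."""
          errors = [abs(s - r) for s, r in zip(student_scores, reference)]
          return sum(errors) / len(errors) <= tolerance

      print(is_calibrated([4, 3, 5]))  # True: mean error ~0.33, may begin peer review
      print(is_calibrated([1, 5, 2]))  # False: mean error 3.0, needs more rubric training
      ```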
  4. Jan 2014
    1. This suggests that peer production will thrive where projects have three characteristics

      If thriving is a metric of success (is it measurable? too subjective?), then the three characteristics a project must have are:

      • modularity: divisible into components
      • granularity: fine-grained modularity
      • integrability: low-cost integration of contributions

      I don't dispute that these characteristics are needed, but they are too general to be helpful, so I propose that we look at these three characteristics through the lens of the type of contributor we are seeking to motivate.

      How do these characteristics inform what we should focus on to remove barriers to collaboration for each of these contributor-types?

      Below I've made up a rough list of lenses. Maybe you have links or references that have already made these classifications better than I have... if so, share them!

      Roughly, here are the classifications of relationships to open source projects that I commonly see:

      • core developers: hired by a company, foundation, or some other entity to work on the project. These people care most about integrability.

      • ecosystem contributors: people who are either self-motivated or rewarded via some mechanism outside the institution that funds the core developers (e.g. reputation, a portfolio for future job prospects, tools and platforms that support a consulting business). These people care most about modularity.

      • feature-driven contributors: The project is useful out of the box for these people; rather than build their own tool from scratch, they see that they can make the tool work the way they want by contributing code, or at least a feature request based on their idea. These people care most about granularity.

      The above lenses fit the characteristics outlined in the article, but below are other contributor-types that don't directly care about these characteristics.

      • the funder: a company, foundation, crowd, or some other funding body that directly funds the core developers to work on the project for hire.

      • consumer contributors: This class of people might not even be aware that they are contributors: simply using the project returns direct benefits, because logs and other instrumented uses of the tool generate data that can be used to improve the project.

      • knowledge-driven contributors: These contributors are probably closest to the ecosystem contributors, maybe even a sub-species of them, who contribute to documentation and to learning the system; they may be less skilled at coding, but they still play a valuable part in the community even if they are not committing to the core code base.

      • failure-driven contributors: A primary source of bug reports; they may also fall under any one of the other lenses.

      What other lenses might be useful to look through? What characteristics are we missing? How can we reduce barriers to contribution for each of these contributor types?

      I feel that there are plenty of motivations... but what barriers exist, and which motivations are sufficient for enough people to be willing to surmount those barriers? I think it may be easier to focus on the barriers, making contributing less painful for the already-convinced, than to think about motivators for those who still need convincing. The consumer contributors are some of the people best suited to convince the unconvinced; our job should be to remove the barriers for people at each stage of the community we are trying to build.

      A note to the awesome folks at Hypothes.is who are reading our consumer contributions... given the current state of the hypothes.is project, what class of contributors are you most in need of?

    2. the proposition that diverse motivations animate human beings, and, more importantly, that there exist ranges of human experience in which the presence of monetary rewards is inversely related to the presence of other, social-psychological rewards.

      The first analytic move.

    3. common appropriation regimes do not give a complete answer to the sustainability of motivation and organization for the truly open, large-scale nonproprietary peer production projects we see on the Internet.

      Towards the end of our last conversation, the text following "common appropriation" seemed like an interesting place to dive into for our future discussions.

      I have tagged this annotation with "meta" because it is a comment about our discussion and where to continue it rather than an annotation focused on the content itself.

      In the future I would be interested in exploring the idea of "annotation types" that can be selectively turned on and off, but for now I will handle that with ad hoc tags like "meta". A rough sketch of what that toggling might look like is below.
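
      A minimal sketch of that idea, assuming each annotation carries a "tags" list (as hypothes.is annotations do); the sample data and toggle set are invented for illustration:

      ```python
      # Hypothetical sketch: treating ad hoc tags like "meta" as annotation
      # types a reader can toggle on and off. The sample annotations and the
      # toggle set are invented for illustration.

      annotations = [
          {"text": "a comment on the content itself", "tags": []},
          {"text": "where to continue our discussion", "tags": ["meta"]},
      ]

      enabled_types = {"meta"}  # annotation types the reader has toggled on

      def is_visible(annotation):
          """Untyped annotations always show; typed ones only when enabled."""
          tags = set(annotation["tags"])
          return not tags or bool(tags & enabled_types)

      print([a["text"] for a in annotations if is_visible(a)])  # both annotations

      enabled_types.discard("meta")  # toggle the "meta" type off
      print([a["text"] for a in annotations if is_visible(a)])  # content comment only
      ```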

    4. understanding that when a project of any size is broken up into little pieces, each of which can be performed by an individual in a short amount of time, the motivation to get any given individual to contribute need only be very small.

      The second analytic move.