4 Matching Annotations
    1. Iterative assessments, in which proposals may be accepted, rejected, or returned with feedback for the investigators to respond to before being reconsidered for funding, are also being explored in the Indigenous funding space at the Canadian Institutes of Health Research. A similar process, called UCR (Under Continuing Review), was formerly used in the Randomized Controlled Trials Committees of their old Open Operating Grants Program: if the committee had simple questions that could make or break a proposal, it could rate the application provisionally, conditional on a satisfactory response to those questions. If the application then fell within the funding cut-off, the applicant would receive the question(s) and have 5 business days to respond. If the response was satisfactory, the application would be funded; if not, it would be deemed unfundable and the applicant would need to re-apply in a future competition.

      It's not clear to me that this is distributed peer review?

    2. The Alexander von Humboldt Foundation in Germany is currently experimenting with combined peer review formats and has already had positive experiences. Specifically, instead of requesting peer reviews for each proposal separately, the reviews are conducted jointly on a digital, interactive platform where researchers exchange their reviews and comments and discuss the quality and innovativeness of the proposals. This way, the reviewers can also directly compare the proposals with each other and rank them. https://www.humboldt-foundation.de/en/explore/figures-and-statistics/evaluation/evaluation-of-the-2022-peer-circle-experiment

      This is a repetition.

    3. Mitigation: Implement a double-blind review process in which both reviewer and applicant identities are anonymized. Provide bias-awareness training as part of reviewer onboarding.

      Maybe we should add something about dividing the applicants into two pools with separate budgets and then letting them evaluate applications from the other pool. In this way, they never evaluate their direct competitors. I think NWO has tried this. (A rough sketch of this pooling scheme follows after this list.)

    4. Discussion

      It is a bit unclear from the title that we mean a discussion/interview with the applicant (and not just a discussion among the reviewers). Maybe rephrase it to "Interview before scoring"?
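
To illustrate the pooling idea raised in annotation 3, here is a minimal sketch, assuming a simple random split and a fixed number of reviews per proposal. It is a hypothetical illustration, not NWO's or any funder's actual procedure, and all names and parameters below are invented. Applicants are partitioned at random into two pools with separate budgets, and each proposal is reviewed only by applicants from the other pool:

    import random

    def split_into_pools(applicants, seed=42):
        # Randomly partition applicants into two (near-)equal pools,
        # each of which would draw on its own separate budget.
        rng = random.Random(seed)
        shuffled = list(applicants)
        rng.shuffle(shuffled)
        mid = len(shuffled) // 2
        return shuffled[:mid], shuffled[mid:]

    def assign_cross_pool_reviews(pool_a, pool_b, reviews_per_proposal=3, seed=42):
        # Each proposal is reviewed only by applicants from the *other*
        # pool, so reviewers never assess their direct competitors.
        rng = random.Random(seed)
        assignments = {}
        for proposals, reviewers in ((pool_a, pool_b), (pool_b, pool_a)):
            for proposal in proposals:
                assignments[proposal] = rng.sample(reviewers, reviews_per_proposal)
        return assignments

    applicants = ["applicant_%02d" % i for i in range(10)]
    pool_a, pool_b = split_into_pools(applicants)
    for proposal, reviewers in assign_cross_pool_reviews(pool_a, pool_b).items():
        print(proposal, "->", reviewers)

One caveat of this design is that review load stays balanced only if the two pools are of similar size; a real implementation would also need to handle conflicts of interest that survive the split.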