2 Matching Annotations
  1. Oct 2020
    1. And though flags from this software don’t automatically mean students will be penalized—instructors can review the software’s suspicions and decide for themselves how to proceed—it leaves open the possibility that instructors’ own biases will determine whether to bring academic dishonesty charges against students. Even just an accusation could negatively affect a student’s academic record, or at the very least how their instructor perceives them and their subsequent work.

      The companies present this as a feature: the algorithms are not supposed to be acted on without human review. I wonder how this "feature" will interact with implicit (and explicit) biases, or with the power dynamics between adjuncts, students, and departmental administration.

      The companies are caught between a rock and a hard place in deciding whether students should be told that their attempt was flagged for review. If the student is informed, it causes stress and damages the teacher-student relationship. But if they are not informed, all these issues of bias and power become invisible.

  2. Jun 2020