5 Matching Annotations
  1. Jan 2022
    1. For a long time, I have held the view that the rigor of a course is measured not by the quantity of knowledge the instructor puts on display, but by the learning that students actually do.

      This reframing points toward assessing pedagogical efficacy directly. It's funny, to me, that those who complain about "grade inflation" (typically admins) rarely entertain the notion that grades could be higher than usual because the course went well. The situation is quite different in "L&D" (Learning and Development, typically training and professional development in an organizational context): "Oh, great! We were able to get everyone to reach the standard for this competency! Must mean we did something right in our Instructional Design!"

  2. Nov 2021
  3. Sep 2021
  4. Oct 2020
    1. And though flags from this software don’t automatically mean students will be penalized—instructors can review the software’s suspicions and decide for themselves how to proceed—it leaves open the possibility that instructors’ own biases will determine whether to bring academic dishonesty charges against students. Even just an accusation could negatively affect a student’s academic record, or at the very least how their instructor perceives them and their subsequent work.

      The companies are hiding behind this as a feature: the algorithm's flags are not supposed to be acted on without human review. I wonder how this "feature" will interact with implicit (and explicit) biases, or with the power dynamics among adjuncts, students, and departmental administration.

      The companies are also caught between a rock and a hard place in deciding whether students should be informed that their attempt was flagged for review. If the student is informed, it causes stress and pain and damages the teacher-student relationship. But if they're not informed, all these issues of bias and power become invisible.

  5. Jun 2020