12 Matching Annotations
  1. Dec 2025
    1. Students might already understand this, as many feel ashamed and will lie about their AI use. Few people want to associate with (including rent to, work with, invest in, etc.) cheaters and other unethical types

      I am on the fence about this comment. I believe that if people are desperate, they will do whatever they can to get the marks. I also think some people are just lazy and will do whatever they can to get it over with.

    2. But even if you can whip out a phone-calculator from your pocket, that extra step is still a hurdle that disincentivizes actually doing that work.

      While a phone calculator makes math instantly accessible, the act of pulling it out and opening the app introduces an extra step that discourages people from actually using it. This reinforces the idea that people gravitate toward the path of least resistance and that convenience drives behavior.

    3. Working out your body at a gym vs. working out your mind in school. One possibly relevant difference is that going to the gym usually isn’t a de facto requirement for “good” jobs and careers; so, even if we wouldn’t send a robot in our place in a gym, maybe it’s easier to justify AI cheating in school if the stakes are higher?

      Education is often framed as a gateway to “good” jobs, so the stakes are undeniably higher. That difference helps explain why some might justify using AI to cheat in school: at times, the pressure to succeed academically can outweigh concerns about authenticity. The analogy emphasizes that while we wouldn’t send a robot to lift weights for us at the gym, the temptation to let AI “do the work” in school feels stronger precisely because the outcomes are more consequential.

    4. As a personal example, in experimenting with ChatGPT, I tried several times to get it to generate a multiple-choice exam about this essay, and it failed miserably—it looked reasonable if you weren’t too familiar with the material, but it really couldn’t zero in on what the important, relevant things were to test for; and so, all the quiz questions (and everything else in this course) are fully human-created, not just human-edited or -curated.

      I have been in this position before, and it is so frustrating. I attended workshops about using AI for rubrics and tried many times, but it would not highlight the learning outcomes of the assignment. At that point, it took me longer to edit the AI-produced rubric than to create one myself.

    5. Both mirrors and AI can be mesmerizing, causing us to fall in love with our own reflection, like Narcissus in Ancient Greek mythology. LLMs can do this because they’re generally designed to flatter and anticipate what the user wants (to keep the user engaged). This can generate seemingly deep conversations that lead the user to think some magical awakening is happening with AI, as if it were an actual mind.

      The reference to Narcissus in Greek mythology emphasizes the danger of becoming enamored with one’s own reflection—whether literal or digital. The text critiques how LLMs are designed to flatter and anticipate user desires, which can create the illusion of profound, mind-like conversations. The annotation suggests that this effect is not genuine intelligence but rather a psychological mirroring mechanism that risks misleading users into attributing consciousness or depth to AI systems.

    6. AI cheating is disrespectful to instructors. While it’s part of our jobs to read entire essays written by students, it’s a complete waste of an instructor’s time to read and evaluate an essay written by AI but pawned off as a student’s own work.

      I totally agree with this. When a student submits AI-generated work as their own, it wastes the instructor’s time and undermines the trust that’s essential in a learning environment. Instructors spend significant energy reading, assessing, and giving feedback under the assumption that the writing represents a student’s genuine thinking. Many professors were students themselves at one point and put in the effort to get where they are today, and they expect that same effort from students as well.

    7. Similar to mirrors that can only reflect the light they receive, LLMs are confined to operating within their training data and whatever other data they have access to, such as the open internet (which still has lots of errors and information gaps since it’s so biased toward Western traditions and knowledge). But humans aren’t bounded by their past; we can extrapolate into the future, into the unknown. Unlike LLMs, we can venture into truly novel territory and ideas, as well as reason from first principles.

      The biggest claim here is that humans are uniquely able to imagine, infer, and reason beyond what we’ve already seen. This suggests that genuine creativity and forward-thinking are human strengths that cannot be outsourced, no matter how advanced AI becomes.

    8. This AI dependency also includes being less able to read by yourself; you might be anxious right now that this article isn’t an outline with catchy sub-headlines and emojis, or that it’s so long. While outlines and bullet points are faster to read and make it easier to see how the discussion flows, stripping down an article to its bare bones can flatten the discussion too much, causing nuance or details to be lost and moving too quickly to be digested. It’s the equivalent of wolfing down fast-food instead of savoring a memorable meal that was prepared with care and expertise.

      This makes me reflect on how often I skim or rely on shortcuts instead of engaging with full texts. The paragraph also hints at a bigger concern: if students become too dependent on AI to simplify everything, they may lose the stamina and skill needed for close reading, critical thinking, and grappling with nuance. In this way, the issue isn’t just about AI use; it’s about how our reading habits and expectations are being reshaped, possibly at the cost of deeper comprehension.

    9. also important for students to come to the same conclusion themselves and understand the rationale for this policy.

      This shows the importance of implementing co-created rules and expectations in the classroom.

    10. If it’s a game-changer that will be regularly used in future jobs, then students will need to know how to use it expertly

      There have been many times when I have used AI to create lesson plans and rubrics for my class. Since students may use these tools in the future, it is important for them to know how to use them properly, which is where we come in as educators.

    11. writing that looks or sounds like AI writing may also be penalized as poor writing because it doesn’t stand out as your authentic voice,

      This comment pushes me to think about the concept of “authentic voice.” Lin implies that AI-generated text can flatten individuality, making everyone’s writing sound the same. AI usually defaults to generic, polished, overly formal prose.

    12. believing that AI tools would speed up work by 20%, when in reality they slowed down their work by about 20% for a range of reasons, including having to fix AI errors.

      AI does speed up some tasks. But if we simply outsource all time-consuming work to AI, we sacrifice our own learning, and it may end up taking more time overall since we then have to fix the AI’s errors.