5 Matching Annotations
  1. Last 7 days
    1. For instance, imagine that AI use is banned in the classroom (as it is in ours), but the instructor secretly used AI to give feedback on your assignments, even though you are required to put in the work yourself.

What about when AI is banned from a class, but the professor uses an AI detector, which is itself AI?

Maybe use Tavare’s experience with his professor in the “essay.”

    2. AI cheating is disrespectful to instructors. While it’s part of our jobs to read entire essays written by students, it’s a complete waste of an instructor’s time to read and evaluate an essay written by AI but pawned off as a student’s own work. As this writer puts it, “AI writing, meanwhile, is a cognitive pyramid scam. It’s a fraud on the reader. The writer who uses AI is trying to get the reader to invest their time and attention without investing any of their own.”

I agree. But at the same time, this contradicts what is said above about AI writing not being good. If AI writing isn’t good, why would an AI-written paper earn a high grade?

    3. Even though AI detectors are still not foolproof, they’re getting better and better with new detection methods emerging quickly, with new ones possibly coming out as soon as tomorrow.

But that raises the concern that AI detectors are themselves AI tools. Just as generative AI is often wrong, so are AI detectors.

    4. This means LLMs are essentially glorified mirrors, reflecting our own words and images back to us from different angles, as philosopher Shannon Vallor has argued.

AI tools get their information from other sources. This is why they should be used to throw ideas around, not asked to supply the actual ideas or create the actual product.