23 Matching Annotations
  1. Apr 2026
    1. AI serves as a new kind of participant in the learning process: a participant that can offer instant feedback, generate creative prompts, and even model revision strategies at any moment in the writing process

      Treat AI as a "writing partner" or "coach" rather than a replacement for the writer's original thought.

    2. We are both concerned that the struggle with writing, the effort it takes to make a decision about the next best step in our piece, remains part of the process of writing

      If AI makes writing too easy, we lose the part that teaches us how to think. AI should assist, not bypass.

    3. We used ChatGPT to save a bit of our mental energy trying to think up our students’ next steps and spent that energy instead discussing the best options among the next steps this program was suggesting

      AI handles the "boring work" of generating basic ideas, allowing the teacher or student to focus on the higher-level work of deciding which idea is actually best.

    4. the setting, her main character was male, the twist was more involved, and she removed the supernatural elements entirely. She only needed ChatGPT to flood her with ideas.

      Second part of the highlighted passage (continues the previous annotation).

    5. In her final draft, much of what the student incorporated significantly deviated from ChatGPT’s suggestions

      The student used AI to "flood her with ideas," then changed the characters and plot herself, maintaining true authorship.

    6. to what changes are suggested and whether they represent the writer’s true intentions.

      When using AI for grammar or editing, the writer's job is to ensure the corrected version doesn't change their meaning. The writer is the final judge of intention.

    7. The primary function of an LLM is to predict the next word in a sequence based on the words that came before it (think about the word suggestions when you text).

      AI does not think; it just predicts patterns. This explains why it can be wrong even when it sounds right. In philosophical terms, this is sometimes called a "bullshitter": it produces text with no regard for whether it is true.
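
      To make the "predict the next word" idea concrete for myself: a toy sketch that only counts which word most often follows another. The tiny corpus and names here are made up for illustration; real LLMs use vastly larger statistics, but the core move is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word most often follows each word.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often here
print(predict_next("sat"))  # "on"
```

      The model has no idea what a cat is; it only knows the counts. That is the sense in which it "sounds right" without thinking.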

    8. Historically, access to quality writing support has been unevenly distributed, often influenced by the resources available in the school district. Students from under-resourced backgrounds have faced significant disadvantages in this regard. This AI tool, accessible to anyone with an internet connection, offers an opportunity to help level the playing field

      AI can act as a 24/7 writing tutor for students who don't have access to private tutors or extra help at home.

    9. Interacting with ChatGPT is an iterative process. You can and should re-prompt it to meet your goals and needs. This creates what is known as a chain prompt.

      Writing with AI is not a one-and-done task; each re-prompt builds on what came before.
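
      A sketch of what a "chain prompt" looks like as data: each re-prompt is appended to a running conversation, so the model sees the whole history. The message format below mirrors common chat APIs, but no real model is called; the example prompts are invented.

```python
# Each turn in a chain prompt is appended to the same conversation list,
# so later prompts refine earlier output instead of starting over.
conversation = []

def add_turn(role, text):
    conversation.append({"role": role, "content": text})

add_turn("user", "Suggest three titles for my essay on school gardens.")
add_turn("assistant", "1. Growing Minds  2. Seeds of Change  3. Roots of Learning")
# Re-prompting continues the chain rather than opening a fresh request:
add_turn("user", "Make option 2 shorter and less formal.")

print(len(conversation))  # 3 turns so far; the chain keeps growing
```

      The point of the structure is that the second user prompt only makes sense because the model is shown the earlier turns too.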

    10. How people choose to use ChatGPT speaks to their intent,” observes Yoder-Wise. “That intent could be to enrich someone’s understanding of how better to articulate messages. . . . The intent could also be to deceive others in the presentation of written materials”

      AI is neutral; the morality depends entirely on the writer's goals (learning vs. cheating).

    Annotators

    1. the responsibility of fully reading and evaluating the submitted work must always rest solely with the human reviewer.

      The actual thinking and judgment must come from the human, no matter what the AI says.

    2. I’m particularly concerned with a tendency to “ask AI” about what it “thinks” about how it should be used or what it “knows” about our fields of inquiry, as these prompts implicitly treat the AI as a subject with agency

      Never ask an AI what it "thinks"; it doesn't have opinions.

    3. Ideally, editorial policies should help authors use these tools in specific, targeted ways that don’t drive out linguistic variation and the richness of global Englishes.

      The goal is not to get rid of AI but to use it in ways that protect people's creativity and linguistic diversity.

    4. Each prompt is, however, still significantly higher than the cost of a search engine query (at least before the search engines added AI overviews), and the longer the ChatGPT output, the more energy it uses (You).

      Every AI prompt has a real-world cost in energy and water.

    5. I would prefer to see AI-produced text to be quoted and cited in the same way that we require recognition and citation of any other text that an author draws on in a manuscript, regardless of the source.

      Treat AI like a source: if you didn't write it, use quotation marks and cite it.

    6. although this use is, of course, counter to the goals of linguistic justice and ultimately pushes academic language toward a more consistently and generically white male voice.

      While AI can help people, it can also erase voices by pushing everyone toward the same sound.

    7. Perhaps the most effective use I’ve seen has focused on assessing and improving organizational structure.

      AI is more useful for big-picture feedback on structure and organization than for line-level style.

    8. They will also make style suggestions to try to make the text conform to the most common expression, based on its statistical understanding of its training data (which is why much of the output, if not guided by extensive prompts, sounds so very generic)

      AI will smooth out a unique voice to make it sound like "everyone else." If I use AI for style, I risk losing my personal voice.

    9. In practical terms, this means that authors have an obligation to review and evaluate any textual output from LLM-based AI systems. Even when being used to help edit work originally produced by an author, these systems will correct most grammatical errors—but will also introduce new ones in the process. They will also make style suggestions to try to make the text conform to the most common expression, based on its statistical understanding of its training data (which is why much of the output, if not guided by extensive prompts, sounds so very generic). In short, the output can never be trusted and thus must always be reviewed.

      You are responsible for every word in your paper, even if AI suggested it. Reviewing is not optional; it's an academic requirement.

    10. the system doesn’t “think” and doesn’t “know,” and if it outputs new text that a human sees as false or incorrect, it’s not a “hallucination”—it’s just doing precisely what it was designed to do.

      "Hallucination" is a misleading word because it makes the system sound human. In reality, the AI is just making a statistical prediction that happened to be wrong.

    11. the system turns all those words and symbols into numbers and then finds the relationships between and among those data points. When provided instructions to produce new text, the system uses its statistical model to provide human-sounding language in response. But, under the hood, it’s converting everything to numerical values, performing analyses on those values, and then converting the results back into words and parts of words.

      AI doesn't process meaning, only math. This is why it can produce fluent sentences without understanding any of them.
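
      A minimal sketch of the "words become numbers" step the highlight describes: build a vocabulary, encode text as integer IDs, and decode the IDs back into words. Real systems use subword tokenizers and huge vocabularies; this toy round trip (with an invented sentence) only shows the conversion idea.

```python
# All the "analysis" happens on these integers, never on meanings.
text = "the model turns words into numbers and back into words"
words = text.split()

# Vocabulary: each distinct word gets an ID, in order of first appearance.
vocab = {word: idx for idx, word in enumerate(dict.fromkeys(words))}
inverse = {idx: word for word, idx in vocab.items()}

encoded = [vocab[w] for w in words]           # text -> numbers
decoded = " ".join(inverse[i] for i in encoded)  # numbers -> text

print(encoded)                # [0, 1, 2, 3, 4, 5, 6, 7, 4, 3]
print(decoded == text)        # True: the numbers round-trip to the text
```

      Nothing in the integer list carries meaning; the system's fluency comes from statistics over such numbers, not from understanding.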
