This is why you are encouraged to write notes as you read
- Oct 2025
-
rws511.pbworks.com
-
-
This could be interesting to compare to AI use because the struggle to write is what led him to his ideas (which other authors have contended before)
-
What about journaling?
-
Moral reason to read, but not to write
-
The idea of an obligation not only to write for a loyal audience, but also to maintain your own credibility as you write, is an interesting motivation to continue writing. Usually I assume people write because they have something they want to say.
-
- Sep 2025
-
abramanders.substack.com
-
In this sense, it can help students better connect the why’s, how’s, and what’s of a given learning objective in ways that align with principles of universal design for learning.
flattening language?
-
Developing Critical Thinking, Reading, and Composing Skills
This would be interesting to mention as a strength because this is a big argument against AI (that it limits our critical thinking skills)
-
In another scenario, a writer trying to explain the complex concept of "Quantum Mechanics" might explore various analogies or metaphors, such as "a game of chance" or "waves in the ocean." With the assistance of generative AI, the writer can develop a more detailed and fleshed-out exposition of each analogy or metaphor, allowing them to gauge which resonates more intuitively and effectively with their intended audience
example
-
Generating Ideas and Prototypes
connections to other texts
-
Ex: Evaluate and Revise Claims
in text example
-
Students could use AI to identify “knowledge telling” sentences that over-rely on summarizing facts from sources and suggest ways to revise using Bloom’s Taxonomy to make sentences “knowledge transforming.”
Okay, but how do they actually ask AI to do these things for them? What prompts should they be using? Is this only allowed after they are proficient in the skill themselves, and then use AI for more efficiency?
-
This would help students recognize and adapt to various rhetorical situations, fostering a deeper understanding of rhetorical concepts such as the rhetorical situation
How would these be analyzed? We have experience with AI-graded sites (PackBack), and we didn't write well, just well enough to fulfill what the algorithm wants
-
Mollick and Mollick
connection to Mollick?
-
they experience a disconnect between academic literacies and professional communication
Connection to Hao and Pickard (what is "professional", and are the literacies presented by AI only a whitewashed version?)
-
“Learn and use key rhetorical concepts through analyzing and composing a variety of texts …”
Could connect to the text about how the best way to learn is to do it yourself (potential weakness?)
-
This being the case, time lost waiting for feedback, which is to say time lost from writing, can be clearly seen as an impediment to student progress in the writing course.
coming back to efficiency
-
… lack of fluent language generation processes constrains novice writers within short-term working memory capacity, whereas fluent encoding and extensive knowledge allow skilled writers to take advantage of long-term memory resources via long-term working memory
Can connect this to English language learners (how to learn the language and what they are lacking/ why they might use AI)
-
behavior which works to offload cognitive function is of great value to a writer. It should therefore be a central concern of writing scholars.
what kind of cognitive offloading? memory or thinking?
-
the future of writing will be shaped by generative AI tools such as ChatGPT. A report from Goldman Sachs researchers has projected that AI will lead to “significant disruption” with around “300 million full time jobs” and argue that “two-thirds of current jobs are exposed to some degree of AI automation while generative AI could substitute up to a quarter of current work.”
exigence/ why is this important
-
-
rws511.pbworks.com
-
The empires of the 21st century don’t need the Dutch East Company, or soldiers, or muskets, or smallpox. They operate through code, unfair contracts and VC prospectus. Where European powers once laid claim to land, labour and resources, AI companies now lay claim to language, culture and memory
This analogy kind of reminds me of the analogy used in O'Neil's piece that companies view AI as an arms race, and I feel like the competition between companies with no regard for the consequences is reflected here
-
The truth is, AI and US ‘fair use’ has yet to be truly tested in a US court. There’s no ruling, no precedent, no legal foundation
Similar concepts to O'Neil
-
-
rws511.pbworks.com
-
“And if you can’t even function in the digital economy, it’s going to be really hard for [our languages] to thrive.”
Is digital literacy a solution to language preservation?
-
But these models, built by hoovering up large swathes of the internet, are also accelerating language loss, in the same way colonization and assimilation policies did previously.
Does language loss also mean culture loss?
-
the same communities and countries already impoverished by former colonial empires.
This is scary to think about, especially when considering how long it took to start to consider reparations
-
-
rws511.pbworks.com
-
Now imagine getting trapped in that same unhelpful loop when you’re trying to get welfare benefits, seek housing, apply for a job, or secure a loan. It’s clear how the impacts of these systems aren’t evenly felt even if all that garbage is cleaned up.
As someone who has made at least 10 tech support phone calls this week with no help because they were all chat bots, this is terrifying
-
A now-defunct AI recruiting tool created by Amazon taught itself male candidates were preferable, after being trained on mostly male résumés. Biased data can have widespread effects that touch the lives of real people.
Maybe there could be a kind of IRB regulation for the data AI is trained on, but instead of focusing on the ethics of how participants are treated, it would review the ethics of the data before introducing it to the AI
-
It relies on a branch of artificial intelligence — statistical machine learning — to recognize patterns rather than produce new text.
I feel like this relates to some of our other readings about how AI works and why it chooses the words it does
-
“The training data has been shown to have problematic characteristics resulting in models that encode stereotypical and derogatory associations along gender, race, ethnicity, and disability status,”
I wonder why this wasn't reason enough for Google to start rolling back on their AI initiatives
-
arms race
good analogy
-
The company’s Responsible AI initiative, which looked at the social implications of artificial intelligence — including “generative” AI systems
I wonder how bad the consequences would have to get before Google would decide to start rescinding their AI
-
-
rws511.pbworks.com
-
The sentences sound fancy. But just because something sounds fancy doesn’t make it meaningful. Just because something sounds obscure doesn’t mean it makes sense.
I feel like this can be one of those biases that AI perpetuates (the idea that sounding fancy correlates with being smart or right, which isn't always the case)
-
Does avoiding lists nowadays make me sound less precise?
Does avoiding sounding like AI also flatten writing?
-
-
rws511.pbworks.com
-
Even if a student will never write so much as an email after she graduates, she needs to be able to figure out what’s true and what’s not, and how to make sure her thinking is consistent and logical.
Learning how to write isn't just so that you'll know how to write emails or memos after graduation, it is so that you'll be a critically thinking adult
-
AI can spit out 1000 words on the French Revolution more efficiently than a high-school student can, and that student will never have to write anything as an adult, so what’s the point of making them write an essay?
Why do many seem to share the value that efficiency is the most important thing?
-
-
rws511.pbworks.com
-
The ideal of college as a place of intellectual growth, where students engage with deep, profound ideas, was gone long before ChatGPT. The combination of high costs and a winner-takes-all economy had already made it feel transactional, a means to an end.
Maybe this is why students don't feel the need to actively avoid AI
-
“They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays. And I get it, because I hated writing essays when I was in school,”
Motivation to get good grades vs motivation due to efficiency or laziness
-
Wendy provides some background on the class she’s taking before copy-and-pasting her professor’s instructions into the chatbot.
AI being context dependent?
-
“Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,”
It's not like the workforce is checking to make sure all of your work is authentic
-
“My grades were amazing,”
There is pressure from everyone to have good grades (university, parents, self, future job/college), so there is always a drive to do well no matter what. How could you change this to reduce the motivation to use AI?
-
Lee thought it absurd that Columbia, which had a partnership with ChatGPT’s parent company, OpenAI, would punish him for innovating with AI.
Kind of sounds like SDSU. How is AI supposed to affect (or not affect) our work if the university gives it to us for free and encourages us to use it?
-
-
www.oneusefulthing.org
-
people need to struggle with a topic to succeed
also, people need to struggle to make it worth something
-
editor or curator.
placing the user in the position of an editor instead of a creator
-
can solve some PhD-level problems, but it can be hard to know whether its answers are useful without being an expert yourself.
why use it at this point
-
-
www.oneusefulthing.org
-
Our new AIs have been trained on a huge amount of our cultural history, and they are using it to provide us with text and images in response to our queries.
it shows us what we expect to see
-
we still default to what we know well.
AI is only as creative as we are?
-
knowledge of that heritage
what kinds of knowledge about what kinds of heritages? do certain people have better access to these than others? is this where biases come into play?
-
-
-
learn and demonstrate your understanding.
learning and forming opinions by writing them down
-
replicate and amplify biases;
i hope we can study these biases more in class
-
The AI product cannot take responsibility,
I don't know if I completely agree with this, because AI presents information as if it were completely true. I agree that the work you submit is your responsibility, but AI isn't just used to complete homework.
-
Every prompt is wasteful at a time we need to live more sustainably.
Sometimes I wonder why this alone isn't enough reason for people to stop using AI.
-