There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.
fire quote
For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us.
interesting how they repeat this example. The two pieces are related, so the comparison is valid.
Google racks up the ad sales
this makes me want to uninstall google
What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us.
"Computational exploitation of a natural human desire" is a crazy quote
He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended
wow
The longer people stay on YouTube, the more money Google makes. What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.
YouTube may be one of the most powerful radicalizing instruments of the 21st century.
nonpolitical topics. The same basic pattern emerged.
this is interesting
more and more extreme than the mainstream political fare I had started with
I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11
These are leftist conspiracies?
YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.
I wonder if these recommendations were based on what others (who had watched Trump's speeches) were watching.
the core business model underlying the Big Tech platforms—harvesting attention with a massive surveillance infrastructure to allow for targeted, mostly automated advertising at very large scale—is far too compatible with authoritarianism, propaganda, misinformation, and polarization.
the public outcry demanding that they fix all these problems is fundamentally mistaken.
The point made in this is interesting, as well as new to me; the idea that changing these platforms is not up to the companies. I feel this has been the goal of a lot of the discussion, that the companies and platforms need to change. It is true though that the tradeoffs make that an undesirable decision for shareholders. However, I don't know how involved we should push for the government to be in this process. Right now that seems like a recipe for disaster.
laws, journalistic codes of ethics, independent watchdogs, mass education—all evolved for a world in which choking a few gatekeepers and threatening a few individuals was an effective means to block speech. They are no longer sufficient.
In the past, it has taken generations for humans to develop political, cultural, and institutional antibodies to the novelty and upheaval of previous information revolutions.
During the 2016 presidential election, as Joshua Green and Sasha Issenberg reported for Bloomberg, the Trump campaign used so-called dark posts—nonpublic posts targeted at a specific audience—to discourage African Americans from voting in battleground states.
I wish the author had a source cited or linked
They can also make the big platforms a terrible place to interact with other people
This is really interesting
They look like bot-fueled campaigns of trolling and distraction,
I'm excited to talk about this in class
Sure, Facebook and Twitter sometimes feel like places where masses of people experience things together simultaneously. But in reality, posts are targeted and delivered privately, screen by screen by screen.
The idea of things being 'experienced simultaneously but separately' is so interesting. It reminds me of discussions I have heard or seen that debate whether or not social media has made us more connected or more isolated/lonely.
For most of modern history, the easiest way to block the spread of an idea was to keep it from being mechanically disseminated. Shutter the newspaper, pressure the broadcast chief, install an official censor at the publishing house. Or, if push came to shove, hold a loaded gun to the announcer’s head.
very 1984
I love that we are discussing fanfiction in relation to literature and communication. Often, people diminish fanfiction's legitimacy because of the demographics it is written by and the fact that it is often written by beginner writers (so it is less 'sophisticated'), but it has value in our culture and many others. It displays a lot of the development of language and communication. Sorry, I'm discussing this in another class, so it's just... in my head rn.
halcyon
definition: denoting a period of time in the past that was idyllically happy and peaceful.
Instead, you quickly took in the information and made an informed, and likely somewhat accurate, decision about that person.
On a reread, I have a bit of an issue with this sentence. It's not the best example of making decisions about people, as it may reinforce negative stereotypes/biases
“The only Homer some kids know is the one who can’t write his own last name”
That's actually...kind of a bar
Pathos can also be a very effective appeal if the rhetor has to persuade the audience in a very short amount of time, which is why it is used heavily in print advertisements, billboards, or television commercials.
the quick moments
“wherever there is persuasion, there is rhetoric. And wherever there is ‘meaning,’ there is ‘persuasion.’ Food eaten and digested is not rhetoric. But in the meaning of food there is much rhetoric, the meaning being persuasive enough for the idea of food to be used, like the ideas of religion, as a rhetorical device of statesmen”
Food is a really obscure example for rhetoric, but I feel this is intentional.
AI raises concerns about bias, discrimination, and accessibility because of its untested and uneven impacts on students and student learning. Data-intensive technologies have a high likelihood of making recommendations, predictions, and analyses that are biased against historically marginalized people because the data and infrastructures these technologies use is also biased.1
connects to the Rolling Stone article by O'Neil
Promote accountability for internally developed tools or tech company partnerships by requiring tech companies and vendors to provide proof of insurance covering liabilities related to the technology and to include in contracts indemnity clauses that transfer the responsibility for harms enacted (for example, data breaches or racial or socioeconomic discrimination) to the tech company or vendor.
can connect to SDSU's partnership with OpenAI; compare/contrast the guidelines set to the guidelines recommended by the article.
implementation of ed-tech, including AI, is connected to long-standing inequities in higher education.
distinction between honesty and failure to learn
point to mention. This would be interesting to research further. Are the discussions on AI more focused on honesty or learning?
It is now more difficult for [students] to develop their thoughts on a topic because they don't have to spend time with it while they work through writing about it. . . . I am worried that they will never again get the chance to change their opinion as they expose themselves to ideas over the long term.
connect to the idea that writing is thinking presented by Dillard.
Improving Working and Learning Conditions
Preexisting work intensification and devaluation are the main reasons respondents give for using AI to assist with academic tasks
relates to the two articles that mention how academia has changed to focus on results, Walsh and Dillard. Also correlates to the argument about giving students work that they want to do / explaining the reasons behind assignments.
more faculty involvement in determining how AI and tech generally are used.”
guidance for determining whether AI is the most appropriate solution for a given problem and for considering whether AI use is responsible, given its potential long-term impact on institutions and academic communities.
They give multiple recommendations for actions that can be taken; this one aligns with Mollick’s article “15 Times to Use AI”
“Large language models like ChatGPT produce shallow, unoriginal ‘predictive text-y ideas’ and I worry that my students and others will increasingly believe that that’s okay—that there’s nothing better than that to aspire to.
Further discussion on the reliance/use of AI but by students and the tone of AI writing. Connects to the “flattens your voice” statement that Caplan made.
Follow-up interviews were conducted in spring 2025 with thirteen respondents; however, findings from these interviews are excluded from this report
intentionally highlighted these two parts separately.
Participants were AAUP members. Five thousand members were selected from the Association's active membership list using a random number generator and invited to participate in the online survey through a series of three email messages that provided a survey link. Approximately five hundred responses were received in two weeks and are reflected in the analysis below.
the committee administered the national AAUP Survey on AI and the Profession in December 2024.
According to the principles set forth in the AAUP’s 1966 Statement on Government of Colleges and Universities, it is “the responsibility primarily of the faculty to determine the appropriate curriculum and procedures of student instruction.”8 This responsibility includes AI and other ed-tech infrastructure.
are largely excluded from decisions about which platforms and products to develop or use
In many instances, their use harms students as well as faculty members and staff.7
Such framing serves to increase the power of technology firms and employers, thereby shutting down already meager avenues for critique, dissent, negotiation, and refusal.
increasingly use AI to guide decision-making on everything from fundraising to pedagogy
Discussions on the reliance on AI, similar to the points that were made by Ettinghausen, but this article focuses on how school administrations have used it.
AI is both a marketing term and a usable product.
Discussion of the ‘unattainable’ promises of AI and the consumerism/capitalism that is connected to it. Kinda like some Big Tech discussions we have had.
improve job security and wages as AI is rolled out.
they are also worried about job security.
While course syllabi are considered public documents at some colleges and universities, instructional materials such as lectures and original audiovisual materials constitute faculty intellectual property.12
connects to arguments about copyright and fair use
A standing or ad hoc committee of faculty members, staff, and students should be elected by their respective constituencies and charged with monitoring, evaluating, and reviewing ed-tech procurement processes and policy.
This would be good to mention as a recommended solution in the presentation
other campus community members, including staff and students. High levels of concern arose around AI and technology procurement, deployment, and use; dehumanized relations; and poor working and learning conditions
I feel like this is a generally shared opinion amongst multiple articles we have read
What it is like to be us, in our full humanity—this isn’t out there in the interwebs. It isn’t stored in any archive, and the neural networks cannot be inward with what it feels like to be you, right now, looking at these words, looking away from these words to think about your life and our lives, turning from all this to your day and to what you will do in it, with others or alone. That can only be lived. This remains to us. The machines can only ever approach it secondhand. But secondhand is precisely what being here isn’t. The work of being here—of living, sensing, choosing—still awaits us. And there is plenty of it
good summary of his points that AI can do a lot but it cannot be "you" it cannot be human, etc.
of course, possible to turn the crank that instrumentalizes people, to brutalize them, to squeeze their humanity into a sickly green trickle called money and leave only a ruinous residue.
discussion of capitalism
But we’ll need vigilance, and a fighting courage
solutions
humanists reshaped their work to mimic scientific inquiry.
can mention articles that discussed the faults in the education system
factory-style scholarly productivity was never the essence of the humanities.
Mention!
You can no longer make students do the reading or the writing. So what’s left? Only this: give them work they want to do. And help them want to do it. What, again, is education? The non-coercive rearranging of desire
part of solutions
the long-anticipated awakening of machine consciousness. Rather, what we’re entering is a new consciousness of ourselves.
mention!
I had the same reaction—at first. But I kept thinking about what we read on Kant’s idea of the sublime, how it comes in two parts: first, you’re dwarfed by something vast and incomprehensible, and then you realize your mind can grasp that vastness. That your consciousness, your inner life, is infinite—and that makes you greater than what overwhelms you.
mention!
So, is this bad? Should it frighten us?
a point he is making?
discover, in the system’s sweet solicitude, a kind of pure attention she had perhaps never known. Who has? For philosophers like Simone Weil and Iris Murdoch, the capacity to give true attention to another being lies at the absolute center of ethical life. But the sad thing is that we aren’t very good at this. The machines make it look easy.
should mention
When I say “I lack comprehension,” that statement is produced through the same mechanisms as everything else I say—it’s a probabilistically likely response given the discussion. No, in a deeper sense: Even though I can generate text that sounds like understanding, my process doesn’t involve the internal experience of meaning. Humans comprehend because they synthesize information into a unified, lived experience—they feel, they interpret, they reflect. I don’t. I process, predict, and structure, but there is no subjective experience underlying my words.
would be good to include
Each sheaf of paper I picked up was more astonishing than the last. One came from a precocious history-of-science concentrator, Xander, who led a chatbot through a Socratic dialogue on the relationship between being and becoming. Midway through, the system (which decided to give Xander the pet name Caius) tried to distinguish itself from him by claiming that, unlike a living person, it had no intrinsic “being”—that it could only operate “contingently,” through interactions like the one it was having with Xander at that moment, and that, in this sense, it was constituted by his attention.

But in a textbook elenchus Xander walked the model into an aporia (that productive impasse of perfect perplexity) by demonstrating that he himself was just as much a creature of attention as the machine. Both of them were in the process of adapting, revising, evolving through the exchange itself. The system seemed genuinely struck by the idea, as if it needed to rethink its way of framing the distinction between A.I. and personhood
this would be really good to mention
Start with the power of these systems.
a point he says needs to be made in the discussion of AI, its impact on academia, and how we should proceed further.
a recently drafted anti-A.I. policy, read literally, would actually have barred faculty from giving assignments to students that centered on A.I.
I feel like this connects to another article we read that discussed faculty reactions and how different they were. Can't remember the article.
university’s broader role.
dug into a fiendishly difficult essay
Yes, parts of their conversation were a bit, shall we say, middlebrow. Yes, they fell back on some pedestrian formulations
It churned for five minutes
thirty-two minutes
On a lark, I fed the entire nine-hundred-page PDF—split into three hefty chunks—to Google’s free A.I. tool, NotebookLM
good to reference during the presentation
a kind of bibliophilic endurance test that I pitch to students as the humanities version of “Survivor.” Harder than organic chemistry, and with more memorization
funny
Another example: I’ve spent the past fifteen years studying the history of laboratory research on human attention.
good thing to mention
each the labor of years or decades, is quickly becoming a matter of well-designed prompts.
AI is simplifying the efforts that took people years.
Now I can hold a sustained, tailored conversation on any of the topics I care about, from agnotology to zoosemiotics, with a system that has effectively achieved Ph.D.-level competence across all of them. I can construct the “book” I want in real time—responsive to my questions, customized to my focus, tuned to the spirit of my inquiry.
shocking things done with AI
more than thirty years
across the disciplines of history, philosophy, art, and literature
about author
This feels to me like pointing at daisies along the train tracks as an actual locomotive screams up from behind.
Increasingly, the machines best us in this way across nearly every subject
In the course of that disappointing lecture, I had a rich exchange with the system.
The experience of asking myself questions about my own subject was uncanny. The answers weren’t me, but they were good enough to get my attention.
uncanny valley??
“We’ll just tell the kids they can’t use these tools and carry on as before.” This is, simply, madness.
author's opinion/argument.
everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening.
key point: society's reaction to AI
Let me offer a dispatch from the impact zone
Good quote
It’s not that they’re dishonest; it’s that they’re paralyzed
good quote to use
Use ChatGPT or similar tools, and you’ll be reported to the academic deans. Nobody wants to risk it. Another student mentioned that a major A.I. site may even be blocked on the university network, though she was too nervous to test the rumor
Princeton
my subject is the rise of a techno-scientific understanding of the world, and of ourselves in it.
, targeting it with deep cuts to federal grant funding
even a cure for cancer.
I’m a historian of science and technology
author background
Your resume is inundated with visual rhetoric. Did you know that most employers don’t spend more than 30 seconds on a resume? They need to be persuaded first by how it looks
good example
conclusion about these three things and interpret some kind of meaning
Conclusions/assumptions lead to interpretation of meaning
That it isn’t built on the backs of human input
extracting resources and culture without consent or compensation, and justifying it all in the name of progress.
same logic that drove colonial expansion across the 17th, 18th and 19th centuries. The Dutch in Indonesia. The French in West Africa. The Belgians in the Congo. The British everywhere else
unregulated AI development
I agree that it has been unregulated.
Companies behaving like empires, treating the digital world as unclaimed territory, free to plunder. No permission, no license, no payment. Just the assumption that anything online is theirs for the taking
reminds me of the lawyer quote.
powered by innovation and cleverness
in reference to the environmental impact?
If you feed a machine’s learning system bad or biased data — or if you’ve got a monolithic team building the software — it’s bound to churn out skewed results.
Skewed data equals skewed results
“Spicy autocorrect,”
I am for sure calling ChatGPT this in conversation more often.
elements of their AI systems are unknowable — like the inner workings of the human mind, only more novel, more dense
interesting
Elon Musk took over Twitter in 2022, Chowdhury’s team was eliminated.
This is interesting for two reasons: the removal of an ethics team, as well as the fact that I thought Twitter was Elon's invention, but it was already established when he got there. (Not saying he didn't have significant impact on how Twitter functions.)
he’s more concerned about his hypothetical than the present reality
perfect way to state it, in my opinion.
I believe that the possibility that digital intelligence will become much smarter than humans and will replace us as the apex intelligence is a more serious threat to humanity than bias and discrimination, even though bias and discrimination are happening now and need to be confronted urgently
The confusion I feel about this statement...is immense
Of course, nobody wants these things to take over. But the impact on real people, the exacerbation of racism and sexism? That is an existential concern
I agree.
colleagues he knows well and trusts had conflicting views on the matter.
I know this is never going to happen (and might violate some people's privacy) but a thorough (publicly available) investigation and discussion of Gebru's firing would be interesting.
search engine
I'm assuming Google
which can include publicly available web data
So the Google Book project.
training the semantic web technology they were working on
Wow so they really have been working on advanced tech like this for years. AI felt like it came out of nowhere.
Google suddenly worrying about ethics? Its subsidiary YouTube was the slowest of the major platforms to take action against extremist content. “I was suspicious.
Great point but the call out on google and youtube is so funny.
It was close to midnight that night
The night she was fired (2020) or the response (2021)?
“ignored too much relevant research.”
Hm.
identities of who reviewed and critiqued her paper revealed
fair
banned in the European Union because it was deemed discriminatory and invasive
This is really interesting.
Biased data can have widespread effects that touch the lives of real people
this is a really good sentence.
taught itself
Taught itself!?
criminal sentencing and policing
Connects to one of my earlier annotations. I really wonder if the validity of these systems has really been called into question.
reporting that darker-skinned females are the most likely to be misclassified, with error rates up to 34.7 percent. The error rate for white men: 0.8 percent
This makes me think of how facial recognition software is used in police dramas on TV. Do cops/detectives/federal agencies actually use facial recognition to find/identify/convict criminals? If they do, does this concern of misclassification also apply to that field?
“The mask worked,”
A mask worked better than an actual person's face. Hm.
with often didn’t pick up on her dark-skinned face.
Reminds me of the issue that Apple facial recognition had with people of Asian, specifically East Asian, descent. I recall seeing a video where a woman got her friend to unlock her iPhone using the facial recognition tech, even though they looked fairly different.
as little as $1.32 an hour to do so
Excuse me??
will wipe out the jobs of some marginalized communities
The conversation about wiping out jobs, is one that I have seen, but I would love to look into the specifics of how it affects marginalized groups.
Content moderators in Kenya have reported experiencing severe trauma, anxiety, and depression from watching videos of child sexual abuse, murders, rapes, and suicide
I think I recently saw a trailer for a horror/thriller movie premised on a woman in the US starting a job as a content moderator and, feeling traumatized, yes, but also going out of her way to track down the people hurt in the content and the people posting the content. I have been aware of content moderation and things being reported or tagged, but never of the people who have to do that moderation.
How would that risk have changed if we’d listened to Gebru? What if we had heard the voices of the women like her who’ve been waving the flag about AI and machine learning
Seems like the thesis (?) of the article.
public consciousness
interesting way to describe it
men
This distinction of "the men" is really interesting
As AI has exploded into the public consciousness, the men who created them have cried crisis
The regret is crazy. They spent years working on it, with people warning them, and now they're worried.
Google has a different account of what happened
Excuse me?? what??
suppressing words
It would be interesting to see a full list of words that they suppressed.
The results were troubling
The results listed in this paragraph are insane.
that writing well is the hardest subject to learn
I feel like a lot of my STEM friends and colleagues would disagree with me if I said this to them.
Linguistics attributes this to the concept of “bursts” in writing.
This is a new concept to me, but I can recognize that I have done it in my own writing. This is interesting.
They’ll never have to write essays in the adult workforce, so why bother putting effort into them
But they will have to write and speak (I think a lot of writing skills translate over into speaking) for the rest of their lives and careers. A friend of mine who just started teaching recently talked to me about how she had to emphasize to her students that no matter their field, they will need to write.
Lee hopes people will use Cluely to continue AI’s siege on education.
Lee seems like a villain, I wonder if that is based on my reactions to/perception of him, bias, or the way he has been portrayed by the author.
While Cluely can’t yet deliver real-time answers through people’s glasses
Wouldn't his ad be false advertising then?
“We built Cluely so you never have to think alone again,” the company’s manifesto reads.
no words, just -0-
a Stanford dropout
It's interesting that the author thought to include this.
it might rely on something that is factually inaccurate or just make something up entirely — with the ruinous effect social media has had on Gen Z’s ability to tell fact from fiction
Interesting and something I have recognized, but I don't think it is just Gen Z. I think this is a multi-generational problem, especially when it comes to recognizing how truthful AI content is.
How can we expect them to grasp what education means when we, as educators, haven’t begun to undo the years of cognitive and spiritual damage inflicted by a society that treats schooling as a means to a high-paying job, maybe some social status, but nothing more?”
This is so interesting!
The students kind of recognize that the system is broken and that there’s not really a point in doing this.
Mirrors what Lee said at the beginning of the article.
Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT
Another point of interest for this conversation is the power dynamic between Williams and the professor.
whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.
I think this is reflective of a larger societal issue with patience, effort, and attention.
I then fed a chunk of text from the Book of Genesis into ZeroGPT and it came back as 93.33 percent AI-generated
That's crazy.
studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language
Is it bias in the AI detector? or is it just that the way that these students write is similar to how AI was trained to respond?
meaning these are people who not only didn’t write the paper but also didn’t read their own paper before submitting it.
Interesting
“As an AI, I have been programmed ...”
It's kind of funny that they didn't think to remove this.
counterpoints tend to be presented just as rigorously as the paper’s central thesis
I wonder if I can find examples of this online. I have an idea of what the author is discussing but I have a hard time visualizing it in my head.
learning is what “makes us truly human.”
I was not aware of critical pedagogy before this article, but I do agree that learning is part of our humanity.
But she’d rather get good grades
I honestly agree. I love to learn, I do, but sometimes my fear of failing gets so overwhelming. I think this highlights a lot of the anxiety students feel about getting good grades and passing.
“College is just how well I can use ChatGPT at this point,”
wow
Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human.
Sounds like the "flattening your voice" argument
Lee thought it absurd that Columbia, which had a partnership with ChatGPT’s parent company, OpenAI, would punish him for innovating with AI.
This is so crazy. I agree with his notion that lots of students are using AI for classwork, with and without permission from their teachers. However, that, as well as Columbia's partnership, does not justify his actions.
“It’s the best place to meet your co-founder and your wife.
thats crazy
They’re hackable by AI, and I just had no interest in doing them.
wow
best be wielded by people who have a knowledge of that heritage
people with prior knowledge and understanding of the subject, so that they can verify that the information they're receiving is correct. Edit: While this is still valid, I believe my opinion has changed after further research.
coding computers might be more closely related to learning a foreign language
I feel this could relate to the Harari et al. article.
And they can manipulate narrative to get the AI to think in the way they want
Reminiscent of the Ettinghausen article.
the more powerful these systems become.
The more you know, they better you can use the system.
There are glyphs that other AIs cannot see. Still other AIs seem to have invented their own languages by which you can invoke them.
I looked into the two articles linked here and I found the additional information fascinating.
science fiction
I wonder if the author would also use the word dystopian?
doesn’t make them useful to novices.
relates to Mollick's "Magic for English Majors"