everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening.
key point: society’s reaction to AI
Let me offer a dispatch from the impact zone
Good quote
It’s not that they’re dishonest; it’s that they’re paralyzed
good quote to use
Use ChatGPT or similar tools, and you’ll be reported to the academic deans. Nobody wants to risk it. Another student mentioned that a major A.I. site may even be blocked on the university network, though she was too nervous to test the rumor
Princeton
my subject is the rise of a techno-scientific understanding of the world, and of ourselves in it.
, targeting it with deep cuts to federal grant funding
even a cure for cancer.
I’m a historian of science and technology
author background
Your resume is inundated with visual rhetoric. Did you know that most employers don’t spend more than 30 seconds on a resume? They need to be persuaded first by how it looks
good example
conclusion about these three things and interpret some kind of meaning
Conclusions/assumptions leads to interpretaion of meaning
That it isn’t built on the backs of human input
extracting resources and culture without consent or compensation, and justifying it all in the name of progress.
same logic that drove colonial expansion across the 17th, 18th and 19th centuries. The Dutch in Indonesia. The French in West Africa. The Belgians in the Congo. The British everywhere else
unregulated AI development
I agree that it has been unregulated.
Companies behaving like empires, treating the digital world as unclaimed territory, free to plunder. No permission, no license, no payment. Just the assumption that anything online is theirs for the taking
reminds me of the lawyer quote.
powered by innovation and cleverness
in reference to the environmental impact?
If you feed a machine’s learning system bad or biased data — or if you’ve got a monolithic team building the software — it’s bound to churn out skewed results.
Skewed data equals skewed results
“Spicy autocorrect,”
I am for sure calling ChatGPT this in conversation more often.
elements of their AI systems are unknowable — like the inner workings of the human mind, only more novel, more dense
interesting
Elon Musk took over Twitter in 2022, Chowdhury’s team was eliminated.
This is interesting for two reasons: the removal of an ethics team, and the fact that I thought META was Elon’s invention, but it was already in the works when he got there. (Not saying he didn’t have a significant impact on how META functioned at Twitter.)
he’s more concerned about his hypothetical than the present reality
perfect way to state it, in my opinion.
I believe that the possibility that digital intelligence will become much smarter than humans and will replace us as the apex intelligence is a more serious threat to humanity than bias and discrimination, even though bias and discrimination are happening now and need to be confronted urgently
The confusion I feel about this statement...is immense
Of course, nobody wants these things to take over. But the impact on real people, the exacerbation of racism and sexism? That is an existential concern
I agree.
colleagues he knows well and trusts had conflicting views on the matter.
I know this is never going to happen (and might violate some people's privacy) but a thorough (publicly available) investigation and discussion of Gebru's firing would be interesting.
search engine
I’m assuming Google
which can include publicly available web data
So the Google Book project.
training the semantic web technology they were working on
Wow so they really have been working on advanced tech like this for years. AI felt like it came out of nowhere.
Google suddenly worrying about ethics? Its subsidiary YouTube was the slowest of the major platforms to take action against extremist content. “I was suspicious.
Great point but the call out on google and youtube is so funny.
It was close to midnight that night
The night she was fired (2020) or the response (2021)?
“ignored too much relevant research.”
Hm.
identities of who reviewed and critiqued her paper revealed
fair
banned in the European Union because it was deemed discriminatory and invasive
This is really interesting.
Biased data can have widespread effects that touch the lives of real people
this is a really good sentence.
taught itself
Taught itself!?
criminal sentencing and policing
Connects to one of my earlier annotations. I really wonder whether the validity of these systems has been called into question.
reporting that darker-skinned females are the most likely to be misclassified, with error rates up to 34.7 percent. The error rate for white men: 0.8 percent
This makes me think of how facial recognition software is used in police dramas on TV. Do cops/detectives/federal agencies actually use facial recognition to find/identify/convict criminals? If they do, does this concern of misclassification also apply to that field?
“The mask worked,”
A mask worked better than an actual persons' face. hm.
with often didn’t pick upon her dark-skinned face.
Reminds me of the issue that Apple facial recognition had with people of Asian, specifically East Asian, descent. I recall seeing a video where a woman got her friend to unlock her iPhone using the facial recognition tech, even though they looked fairly different.
as little as $1.32 an hour to do so
Excuse me??
will wipe out the jobs of some marginalized communities
The conversation about wiping out jobs is one that I have seen, but I would love to look into the specifics of how it affects marginalized groups.
Content moderators in Kenya have reported experiencing severe trauma, anxiety, and depression from watching videos of child sexual abuse, murders, rapes, and suicide
I think I recently saw a trailer for a horror/thriller movie built on the premise of a woman in the US starting a job as a content moderator who, yes, feels traumatized, but also goes out of her way to track down the people hurt in the content and the people posting it. I have been aware of content moderation and things being reported or tagged, but never of the people who have to do that moderation.
How would that risk have changed if we’d listened to Gebru? What if we had heard the voices of the women like her who’ve been waving the flag about AI and machine learning
Seems like the thesis (?) of the article.
public consciousness
interesting way to describe it
men
This distinction of "the men" is really interesting
As AI has exploded into the public consciousness, the men who created them have cried crisis
The regret is crazy. They spent years working on it, with people warning them, and now they're worried.
Google has a differentaccount of what happened
Excuse me?? what??
suppressing words
It would be interesting to see a full list of words that they suppressed.
The results were troubling
The results listed in this paragraph are insane.
that writing well is the hardest subject to learn
I feel like a lot of my STEM friends and colleagues would disagree with me if I said this to them.
Linguistics attributes this to the concept of “bursts” in writing.
This is a new concept to me, but I can recognize that I have done it in my own writing. This is interesting.
They’ll never have to write essays in the adult workforce, so why bother putting effort into them
But they will have to write and speak (I think a lot of writing skills translate over into speaking) for the rest of their lives and careers. A friend of mine who just started teaching recently talked to me about how she had to emphasize to her students that, no matter their field, they will need to write.
Lee hopes people will use Cluely to continue AI’s siege on education.
Lee seems like a villain, I wonder if that is based on my reactions to/perception of him, bias, or the way he has been portrayed by the author.
While Cluely can’t yet deliver real-time answers through people’s glasses
Wouldn't his ad be false advertising then?
“We built Cluely so you never have to think alone again,” the company’s manifesto reads.
no words, just -0-
a Stanford dropout
It's interesting that the author thought to include this.
it might rely on something that is factually inaccurate or just make something up entirely — with the ruinous effect social media has had on Gen Z’s ability to tell fact from fiction
Interesting, and something I have recognized, but I don’t think it is just Gen Z. I think this is a multi-generational problem, especially when it comes to recognizing how truthful AI content is.
How can we expect them to grasp what education means when we, as educators, haven’t begun to undo the years of cognitive and spiritual damage inflicted by a society that treats schooling as a means to a high-paying job, maybe some social status, but nothing more?”
This is so interesting!
The students kind of recognize that the system is broken and that there’s not really a point in doing this.
Mirrors what Lee said at the beginning of the article.
Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT
Another point of interest for this conversation is the power dynamic between Williams and the professor.
whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.
I think this is reflective of a larger societal issue with patience, effort, and attention.
I then fed a chunk of text from the Book of Genesis into ZeroGPT and it came back as 93.33 percent AI-generated
That’s crazy.
studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language
Is it bias in the AI detector? or is it just that the way that these students write is similar to how AI was trained to respond?
meaning these are people who not only didn’t write the paper but also didn’t read their own paper before submitting it.
Interesting
“As an AI, I have been programmed ...”
It's kind of funny that they didn't think to remove this.
counterpoints tend to be presented just as rigorously as the paper’s central thesis
I wonder if I can find examples of this online. I have an idea of what the author is discussing but I have a hard time visualizing it in my head.
learning is what “makes us truly human.”
I was not aware of critical pedagogy before this article, but I do agree that learning is part of our humanity.
But she’d rather get good grades
I honestly agree. I love to learn, I do, but sometimes my fear of failing gets so overwhelming. I think this highlights a lot of the anxiety students feel about getting good grades and passing.
“College is just how well I can use ChatGPT at this point,”
wow
Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human.
Sounds like the "flattening your voice" argument
Lee thought it absurd that Columbia, which had a partnership with ChatGPT’s parent company, OpenAI, would punish him for innovating with AI.
This is so crazy. I agree with his notion that lots of students are using AI for classwork, with and without permission from their teachers. However, that, as well as Columbia's partnership, does not justify his actions.
“It’s the best place to meet your co-founder and your wife.
That’s crazy
They’re hackable by AI, and I just had no interest in doing them.
wow
best be wielded by people who have a knowledge of that heritage
people with prior knowledge and understanding of the subject, so that they can verify that the information they’re receiving is correct. Edit: While this is still valid, I believe my opinion has changed after further research.
coding computers might be more closely related to learning a foreign language
I feel this could relate to the Harari et al. article.
And they can manipulate narrative to get the AI to think in the way they want
Reminiscent of the Ettinghausen article.
the more powerful these systems become.
The more you know, the better you can use the system.
There are glyphs that other AIs cannot see. Still other AIs seem to have invented their own languages by which you can invoke them.
I looked into the two articles linked here and I found the additional information fascinating.
science fiction
I wonder if the author would also use the word dystopian?
doesn’t make them useful to novices.
relates to Mollick's "Magic for English Majors"