280 Matching Annotations
  1. Sep 2025
    1. everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening.

      key point: society's reaction to AI

    2. Use ChatGPT or similar tools, and you’ll be reported to the academic deans. Nobody wants to risk it. Another student mentioned that a major A.I. site may even be blocked on the university network, though she was too nervous to test the rumor
    1. Your resume is inundated with visual rhetoric. Did you know that most employers don’t spend more than 30 seconds on a resume? They need to be persuaded first by how it looks

      good example

    1. same logic that drove colonial expansion across the 17th, 18th and 19th centuries. The Dutch in Indonesia. The French in West Africa. The Belgians in the Congo. The British everywhere else
    2. Companies behaving like empires, treating the digital world as unclaimed territory, free to plunder. No permission, no license, no payment. Just the assumption that anything online is theirs for the taking

      reminds me of the lawyer quote.

    1. If you feed a machine’s learning system bad or biased data — or if you’ve got a monolithic team building the software — it’s bound to churn out skewed results.

      Skewed data equals skewed results

    2. Elon Musk took over Twitter in 2022, Chowdhury’s team was eliminated.

      This is interesting for two reasons: the removal of an ethics team, and the fact that I thought Meta was Elon's invention, but it was already in the works when he got there. (Not saying he didn't have a significant impact on how Meta functions on Twitter.)

    3. I believe that the possibility that digital intelligence will become much smarter than humans and will replace us as the apex intelligence is a more serious threat to humanity than bias and discrimination, even though bias and discrimination are happening now and need to be confronted urgently

      The confusion I feel about this statement...is immense

    4. Of course, nobody wants these things to take over. But the impact on real people, the exacerbation of racism and sexism? That is an existential concern

      I agree.

    5. colleagues he knows well and trusts had conflicting views on the matter.

      I know this will never happen (and it might violate some people's privacy), but a thorough, publicly available investigation and discussion of Gebru's firing would be interesting.

    6. training the semantic web technology they were working on

      Wow, so they really have been working on advanced tech like this for years. AI felt like it came out of nowhere.

    7. Google suddenly worrying about ethics? Its subsidiary YouTube was the slowest of the major platforms to take action against extremist content. “I was suspicious.

      Great point, but the call-out of Google and YouTube is so funny.

    8. criminal sentencing and policing

      Connects to one of my earlier annotations. I really wonder whether the validity of these systems has actually been called into question.

    9. reporting that darker-skinned females are the most likely to be misclassified, with error rates up to 34.7 percent. The error rate for white men: 0.8 percent

      This makes me think of how facial recognition software is used in police dramas on TV. Do cops/detectives/federal agencies actually use facial recognition to find, identify, and convict criminals? If they do, does this concern about misclassification also apply to that field?

    10. with often didn’t pick up on her dark-skinned face.

      Reminds me of the issue that Apple's facial recognition had with people of Asian, specifically East Asian, descent. I recall seeing a video where a woman got her friend to unlock her iPhone using the facial recognition tech, even though they looked fairly different.

    11. will wipe out the jobs of some marginalized communities

      The conversation about wiping out jobs is one that I have seen, but I would love to look into the specifics of how it affects marginalized groups.

    12. Content moderators in Kenya have reported experiencing severe trauma, anxiety, and depression from watching videos of child sexual abuse, murders, rapes, and suicide

      I think I recently saw a trailer for a horror/thriller movie premised on a woman in the US starting a job as a content moderator and, beyond feeling traumatized, going out of her way to track down the people hurt in the content and the people posting it. I have been aware of content moderation and of things being reported or tagged, but never of the people who have to do that moderation.

    13. How would that risk have changed if we’d listened to Gebru? What if we had heard the voices of the women like her who’ve been waving the flag about AI and machine learning

      Seems like the thesis (?) of the article.

    14. As AI has exploded into the public consciousness, the men who created them have cried crisis

      The regret is crazy. They spent years working on it, with people warning them, and now they're worried.

    1. that writing well is the hardest subject to learn

      I feel like a lot of my STEM friends and colleagues would disagree with me if I said this to them.

    2. Linguistics attributes this to the concept of “bursts” in writing.

      This is a new concept to me, but I can recognize that I have done it in my own writing. This is interesting.

    1. They’ll never have to write essays in the adult workforce, so why bother putting effort into them

      But they will have to write and speak (I think a lot of writing skills translate into speaking) for the rest of their lives and careers. A friend of mine who just started teaching recently told me how she had to emphasize to her students that, no matter their field, they will need to write.

    1. Lee hopes people will use Cluely to continue AI’s siege on education.

      Lee seems like a villain; I wonder if that impression comes from my own reactions to him, from bias, or from the way the author has portrayed him.

    2. it might rely on something that is factually inaccurate or just make something up entirely — with the ruinous effect social media has had on Gen Z’s ability to tell fact from fiction

      Interesting, and something I have recognized, but I don't think it is just Gen Z. I think this is a multi-generational problem, especially when it comes to recognizing how truthful AI content is.

    3. How can we expect them to grasp what education means when we, as educators, haven’t begun to undo the years of cognitive and spiritual damage inflicted by a society that treats schooling as a means to a high-paying job, maybe some social status, but nothing more?”

      This is so interesting!

    4. The students kind of recognize that the system is broken and that there’s not really a point in doing this.

      Mirrors what Lee said at the beginning of the article.

    5. Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT

      Another point of interest for this conversation is the power dynamic between Williams and the professor.

    6. whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.

      I think this is reflective of a larger societal issue with patience, effort, and attention.

    7. studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language

      Is it bias in the AI detector? Or is it just that the way these students write is similar to how AI was trained to respond?

    8. counterpoints tend to be presented just as rigorously as the paper’s central thesis

      I wonder if I can find examples of this online. I have an idea of what the author is discussing but I have a hard time visualizing it in my head.

    9. learning is what “makes us truly human.”

      I was not aware of critical pedagogy before this article, but I do agree that learning is part of our humanity.

    10. But she’d rather get good grades

      I honestly agree. I love to learn, I do, but sometimes my fear of failing gets so overwhelming. I think this highlights a lot of the anxiety students feel about getting good grades and passing.

    11. Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human.

      Sounds like the "flattening your voice" argument

    12. Lee thought it absurd that Columbia, which had a partnership with ChatGPT’s parent company, OpenAI, would punish him for innovating with AI.

      This is so crazy. I agree with his point that lots of students are using AI for classwork, with and without permission from their teachers. However, neither that nor Columbia's partnership justifies his actions.

    1. best be wielded by people who have a knowledge of that heritage

      people with prior knowledge and understanding of the subject, so that they can verify that the information they're receiving is correct. edit: While this is still valid, I believe my opinion has changed after further research.

    2. There are glyphs that other AIs cannot see. Still other AIs seem to have invented their own languages by which you can invoke them.

      I looked into the two articles linked here and I found the additional information fascinating.