159 Matching Annotations
  1. Nov 2025
    1. There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.

      fire quote

    2. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us.

      interesting how they repeat this example. The two pieces are related, so the comparison is valid.

    3. What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us.

      "Computational exploitation of a natural human desire" is a crazy quote

    4. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended

      wow

    5. The longer people stay on YouTube, the more money Google makes. What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.
    6. I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11

      These are leftist conspiracies?

    7. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

      I wonder if these recommendations were based on what others (who had watched Trump's speeches) were watching.

  2. rws511.pbworks.com
    1. the core business model underlying the Big Tech platforms—harvesting attention with a massive surveillance infrastructure to allow for targeted, mostly automated advertising at very large scale—is far too compatible with authoritarianism, propaganda, misinformation, and polarization.
    2. the public outcry demanding that they fix all these problems is fundamentally mistaken.

      The point made here is interesting, and new to me: the idea that changing these platforms is not up to the companies. A lot of the discussion seems to have had that goal, that the companies and platforms need to change. It is true, though, that the tradeoffs make that an undesirable decision for shareholders. However, I don't know how involved the government should be in this process. Right now that seems like a recipe for disaster.

    3. laws, journalistic codes of ethics, independent watchdogs, mass education—all evolved for a world in which choking a few gatekeepers and threatening a few individuals was an effective means to block speech. They are no longer sufficient.
    4. In the past, it has taken generations for humans to develop political, cultural, and institutional antibodies to the novelty and upheaval of previous information revolutions.
    5. During the 2016 presidential election, as Joshua Green and Sasha Issenberg reported for Bloomberg, the Trump campaign used so-called dark posts—nonpublic posts targeted at a specific audience—to discourage African Americans from voting in battleground states.

      I wish the author had cited or linked a source

    6. Sure, Facebook and Twitter sometimes feel like places where masses of people experience things together simultaneously. But in reality, posts are targeted and delivered privately, screen by screen by screen.

      The idea of things being 'experienced simultaneously but separately' is so interesting. It reminds me of discussions I have heard or seen that debate whether or not social media has made us more connected or more isolated/lonely.

    7. For most of modern history, the easiest way to block the spread of an idea was to keep it from being mechanically disseminated. Shutter the newspaper, pressure the broadcast chief, install an official censor at the publishing house. Or, if push came to shove, hold a loaded gun to the announcer’s head.

      very 1984

  3. Oct 2025
    1. I love that we are discussing fanfiction in relation to literature and communication. Often, people diminish fanfiction's legitimacy because of the demographics of who writes it and the fact that its authors are often beginner writers (so it is less 'sophisticated'), but it has value in our culture and many others. It displays a lot of the development of language and communication. Sorry, I'm discussing this in another class so it's just... in my head rn.

  4. Sep 2025
    1. Instead, you quickly took in the information and made an informed, and likely somewhat accurate, decision about that person.

      On a reread, I have a bit of an issue with this sentence. It's not the best example of making decisions about people, as it may reinforce negative stereotypes/biases.

    2. Pathos can also be a very effective appeal if the rhetor has to persuade the audience in a very short amount of time, which is why it is used heavily in print advertisements, billboards, or television commercials.

      the quick moments

    3. “wherever there is persuasion, there is rhetoric. And wherever there is ‘meaning,’ there is ‘persuasion.’ Food eaten and digested is not rhetoric. But in the meaning of food there is much rhetoric, the meaning being persuasive enough for the idea of food to be used, like the ideas of religion, as a rhetorical device of statesmen”

      Food is a really obscure example for rhetoric, but I feel this is intentional.

    1. AI raises concerns about bias, discrimination, and accessibility because of its untested and uneven impacts on students and student learning. Data-intensive technologies have a high likelihood of making recommendations, predictions, and analyses that are biased against historically marginalized people because the data and infrastructures these technologies use is also biased.1

      connects to the Rolling Stone article by O'Neil

    2. Promote accountability for internally developed tools or tech company partnerships by requiring tech companies and vendors to provide proof of insurance covering liabilities related to the technology and to include in contracts indemnity clauses that transfer the responsibility for harms enacted (for example, data breaches or racial or socioeconomic discrimination) to the tech company or vendor.

      can connect to SDSU's partnership with OpenAI; compare/contrast the guidelines set there with the guidelines recommended by the article.

    3. distinction between honesty and failure to learn

      point to mention. This would be interesting to research further. Are the discussions on AI more focused on honesty or learning?

    4. It is now more difficult for [students] to develop their thoughts on a topic because they don’t have to spend time with it while they work through writing about it. . . . I am worried that they will never again get the chance to change their opinion as they expose themselves to ideas over the long term.

      connect to the idea that writing is thinking presented by Dillard.

    5. Improving Working and Learning Conditions: Preexisting work intensification and devaluation are the main reasons respondents give for using AI to assist with academic tasks

      relates to the two articles that mention how academia has changed to focus on results, Walsh and Dillard. Also connects to the argument about giving students work that they want to do/explaining the reason behind assignments.

    6. guidance for determining whether AI is the most appropriate solution for a given problem and for considering whether AI use is responsible, given its potential long-term impact on institutions and academic communities.

      They give multiple recommendations for actions that can be taken, this one aligns with Mollick’s article “15 Times to Use AI”

    7. “Large language models like ChatGPT produce shallow, unoriginal ‘predictive text-y ideas’ and I worry that my students and others will increasingly believe that that’s okay—that there’s nothing better than that to aspire to.

      Further discussion of students' reliance on/use of AI and the tone of AI writing. Connects to the “flattens your voice” statement that Caplan made.

    8. Follow-up interviews were conducted in spring 2025 with thirteen respondents; however, findings from these interviews are excluded from this report

      intentionally highlighted these 2 parts separately.

    9. Participants were AAUP members. Five thousand members were selected from the Association’s active membership list using a random number generator and invited to participate in the online survey through a series of three email messages that provided a survey link. Approximately five hundred responses were received in two weeks and are reflected in the analysis below.
    10. According to the principles set forth in the AAUP’s 1966 Statement on Government of Colleges and Universities, it is “the responsibility primarily of the faculty to determine the appropriate curriculum and procedures of student instruction.”8 This responsibility includes AI and other ed-tech infrastructure.
    11. Such framing serves to increase the power of technology firms and employers, thereby shutting down already meager avenues for critique, dissent, negotiation, and refusal.
    12. increasingly use AI to guide decision-making on everything from fundraising to pedagogy

      Discussions on the reliance on AI, similar to the points that were made by Ettinghausen, but this article focuses on how school administrations have used it.

    13. AI is both a marketing term and a usable product.

      Discussion of the ‘unattainable’ promises of AI and the consumerism/capitalism that is connected to it. Kinda like some Big Tech discussions we have had.

    14. While course syllabi are considered public documents at some colleges and universities, instructional materials such as lectures and original audiovisual materials constitute faculty intellectual property.12

      connects to arguments about copyright and fair use

    15. A standing or ad hoc committee of faculty members, staff, and students should be elected by their respective constituencies and charged with monitoring, evaluating, and reviewing ed-tech procurement processes and policy.

      This would be good to mention as a recommended solution in the presentation

    16. other campus community members, including staff and students. High levels of concern arose around AI and technology procurement, deployment, and use; dehumanized relations; and poor working and learning conditions

      feels like this is a general opinion shared across multiple articles we have read

    1. What it is like to be us, in our full humanity—this isn’t out there in the interwebs. It isn’t stored in any archive, and the neural networks cannot be inward with what it feels like to be you, right now, looking at these words, looking away from these words to think about your life and our lives, turning from all this to your day and to what you will do in it, with others or alone. That can only be lived. This remains to us. The machines can only ever approach it secondhand. But secondhand is precisely what being here isn’t. The work of being here—of living, sensing, choosing—still awaits us. And there is plenty of it

      good summary of his points that AI can do a lot, but it cannot be "you," it cannot be human, etc.

    2. of course, possible to turn the crank that instrumentalizes people, to brutalize them, to squeeze their humanity into a sickly green trickle called money and leave only a ruinous residue.

      discussion of capitalism

    3. You can no longer make students do the reading or the writing. So what’s left? Only this: give them work they want to do. And help them want to do it. What, again, is education? The non-coercive rearranging of desire

      part of solutions

    4. I had the same reaction—at first. But I kept thinking about what we read on Kant’s idea of the sublime, how it comes in two parts: first, you’re dwarfed by something vast and incomprehensible, and then you realize your mind can grasp that vastness. That your consciousness, your inner life, is infinite—and that makes you greater than what overwhelms you.

      mention!

    5. discover, in the system’s sweet solicitude, a kind of pure attention she had perhaps never known. Who has? For philosophers like Simone Weil and Iris Murdoch, the capacity to give true attention to another being lies at the absolute center of ethical life. But the sad thing is that we aren’t very good at this. The machines make it look easy.

      should mention

    6. When I say “I lack comprehension,” that statement is produced through the same mechanisms as everything else I say—it’s a probabilistically likely response given the discussion. No, in a deeper sense: Even though I can generate text that sounds like understanding, my process doesn’t involve the internal experience of meaning. Humans comprehend because they synthesize information into a unified, lived experience—they feel, they interpret, they reflect. I don’t. I process, predict, and structure, but there is no subjective experience underlying my words.

      would be good to include

    7. Each sheaf of paper I picked up was more astonishing than the last. One came from a precocious history-of-science concentrator, Xander, who led a chatbot through a Socratic dialogue on the relationship between being and becoming. Midway through, the system (which decided to give Xander the pet name Caius) tried to distinguish itself from him by claiming that, unlike a living person, it had no intrinsic “being”—that it could only operate “contingently,” through interactions like the one it was having with Xander at that moment, and that, in this sense, it was constituted by his attention. But in a textbook elenchus Xander walked the model into an aporia (that productive impasse of perfect perplexity) by demonstrating that he himself was just as much a creature of attention as the machine. Both of them were in the process of adapting, revising, evolving through the exchange itself. The system seemed genuinely struck by the idea, as if it needed to rethink its way of framing the distinction between A.I. and personhood

      this would be really good to mention

    8. Start with the power of these systems.

      a point he says needs to be made in the discussion of AI, its impact on academia, and how we should proceed further.

    9. a recently drafted anti-A.I. policy, read literally, would actually have barred faculty from giving assignments to students that centered on A.I.

      feel like this connects to another article we read that discussed faculty reactions and how different they were. Can't remember the article.

    10. On a lark, I fed the entire nine-hundred-page PDF—split into three hefty chunks—to Google’s free A.I. tool, NotebookLM

      good to reference during the presentation

    11. a kind of bibliophilic endurance test that I pitch to students as the humanities version of “Survivor.” Harder than organic chemistry, and with more memorization

      funny

    12. each the labor of years or decades, is quickly becoming a matter of well-designed prompts.

      AI is simplifying the efforts that took people years.

    13. Now I can hold a sustained, tailored conversation on any of the topics I care about, from agnotology to zoosemiotics, with a system that has effectively achieved Ph.D.-level competence across all of them. I can construct the “book” I want in real time—responsive to my questions, customized to my focus, tuned to the spirit of my inquiry.

      shocking things done with AI

    14. The experience of asking myself questions about my own subject was uncanny. The answers weren’t me, but they were good enough to get my attention.

      uncanny valley??

    15. everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening.

      key point: society's reaction to AI

    16. Use ChatGPT or similar tools, and you’ll be reported to the academic deans. Nobody wants to risk it. Another student mentioned that a major A.I. site may even be blocked on the university network, though she was too nervous to test the rumor
    1. Your resume is inundated with visual rhetoric. Did you know that most employers don’t spend more than 30 seconds on a resume? They need to be persuaded first by how it looks

      good example

    1. same logic that drove colonial expansion across the 17th, 18th and 19th centuries. The Dutch in Indonesia. The French in West Africa. The Belgians in the Congo. The British everywhere else
    2. Companies behaving like empires, treating the digital world as unclaimed territory, free to plunder. No permission, no license, no payment. Just the assumption that anything online is theirs for the taking

      reminds me of the lawyer quote.

    1. If you feed a machine’s learning system bad or biased data — or if you’ve got a monolithic team building the software — it’s bound to churn out skewed results.

      Skewed data equals skewed results

    2. Elon Musk took over Twitter in 2022, Chowdhury’s team was eliminated.

      This is interesting for two reasons: the removal of an ethics team, and the fact that I thought the META team was Elon's invention, but it was already in the works when he got there. (Not saying he didn't have a significant impact on how META functioned at Twitter.)

    3. I believe that the possibility that digital intelligence will become much smarter than humans and will replace us as the apex intelligence is a more serious threat to humanity than bias and discrimination, even though bias and discrimination are happening now and need to be confronted urgently

      The confusion I feel about this statement...is immense

    4. Of course, nobody wants these things to take over. But the impact on real people, the exacerbation of racism and sexism? That is an existential concern

      I agree.

    5. colleagues he knows well and trusts had conflicting views on the matter.

      I know this is never going to happen (and might violate some people's privacy) but a thorough (publicly available) investigation and discussion of Gebru's firing would be interesting.

    6. training the semantic web technology they were working on

      Wow so they really have been working on advanced tech like this for years. AI felt like it came out of nowhere.

    7. Google suddenly worrying about ethics? Its subsidiary YouTube was the slowest of the major platforms to take action against extremist content. “I was suspicious.

      Great point, but the call-out on Google and YouTube is so funny.

    8. criminal sentencing and policing

      Connects to one of my earlier annotations. I really wonder if the validity of these systems has been called into question.

    9. reporting that darker-skinned females are the most likely to be misclassified, with error rates up to 34.7 percent. The error rate for white men: 0.8 percent

      This makes me think of how facial recognition software is used in police dramas on TV. Do cops/detectives/federal agencies actually use facial recognition to find/identify/convict criminals? If they do, does this concern of misclassification also apply to that field?

    10. with often didn’t pick up on her dark-skinned face.

      Reminds me of the issue that Apple facial recognition had with people of Asian, specifically East Asian, descent. I recall seeing a video where a woman got her friend to unlock her iPhone using the facial recognition tech, even though they looked fairly different.

    11. will wipe out the jobs of some marginalized communities

      The conversation about wiping out jobs is one that I have seen, but I would love to look into the specifics of how it affects marginalized groups.

    12. Content moderators in Kenya have reported experiencing severe trauma, anxiety, and depression from watching videos of child sexual abuse, murders, rapes, and suicide

      I think I recently saw a trailer for a horror/thriller movie premised on a woman in the US starting a job as a content moderator; she is traumatized, yes, but she also goes out of her way to track down the people hurt in the content and the people posting it. I have been aware of content moderation and of things being reported or tagged, but never of the people who have to do that moderation.

    13. How would that risk have changed if we’d listened to Gebru? What if we had heard the voices of the women like her who’ve been waving the flag about AI and machine learning

      Seems like the thesis (?) of the article.

    14. As AI has exploded into the public consciousness, the men who created them have cried crisis

      The regret is crazy. They spent years working on it, with people warning them, and now they're worried.

    1. that writing well is the hardest subject to learn

      I feel like a lot of my STEM friends and colleagues would disagree with me if I said this to them.

    2. Linguistics attributes this to the concept of “bursts” in writing.

      This is a new concept to me, but I can recognize that I have done it in my own writing. This is interesting.

    1. They’ll never have to write essays in the adult workforce, so why bother putting effort into them

      But they will have to write and speak (I think a lot of writing skills translate over into speaking) for the rest of their lives and careers. A friend of mine who recently started teaching talked to me about how she had to emphasize to her students that, no matter their field, they will need to write.

    1. Lee hopes people will use Cluely to continue AI’s siege on education.

      Lee seems like a villain, I wonder if that is based on my reactions to/perception of him, bias, or the way he has been portrayed by the author.

    2. it might rely on something that is factually inaccurate or just make something up entirely — with the ruinous effect social media has had on Gen Z’s ability to tell fact from fiction

      Interesting and something I have recognized, but I don't think it is just Gen Z. I think this is a multi-generational problem, especially when it comes to recognizing how truthful AI content is.

    3. How can we expect them to grasp what education means when we, as educators, haven’t begun to undo the years of cognitive and spiritual damage inflicted by a society that treats schooling as a means to a high-paying job, maybe some social status, but nothing more?”

      This is so interesting!

    4. The students kind of recognize that the system is broken and that there’s not really a point in doing this.

      Mirrors what Lee said at the beginning of the article.

    5. Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT

      Another point of interest for this conversation is the power dynamic between Williams and the professor.

    6. whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.

      I think this is reflective of a larger societal issue with patience, effort, and attention.

    7. studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language

      Is it bias in the AI detector? Or is it just that the way these students write is similar to how AI was trained to respond?

    8. counterpoints tend to be presented just as rigorously as the paper’s central thesis

      I wonder if I can find examples of this online. I have an idea of what the author is discussing but I have a hard time visualizing it in my head.

    9. learning is what “makes us truly human.”

      I was not aware of critical pedagogy before this article, but I do agree that learning is part of our humanity.

    10. But she’d rather get good grades

      I honestly agree. I love to learn, I do, but sometimes my fear of failing gets so overwhelming. I think this highlights a lot of the anxiety students feel about getting good grades and passing.

    11. Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human.

      Sounds like the "flattening your voice" argument

    12. Lee thought it absurd that Columbia, which had a partnership with ChatGPT’s parent company, OpenAI, would punish him for innovating with AI.

      This is so crazy. I agree with his notion that lots of students are using AI for classwork, with and without permission from their teachers. However, that, as well as Columbia's partnership, does not justify his actions.

    1. best be wielded by people who have a knowledge of that heritage

      people with prior knowledge and understanding of the subject, so that they can verify that the information they're receiving is correct. edit: While this is still valid, I believe my opinion has changed after further research.

    2. There are glyphs that other AIs cannot see. Still other AIs seem to have invented their own languages by which you can invoke them.

      I looked into the two articles linked here and I found the additional information fascinating.