28 Matching Annotations
  1. Apr 2022
    1. empty at the absolute center

      Swanson doesn't give much credit to the circumstances these kids were put into. After the freak accident of creating a viral video, every responsible adult figure in their lives (investors, managers, parents, etc.) tells them to keep making entertaining, likeable, inoffensive TikToks to make money because that is what matters, so they buy into that belief system. Does that make them empty?

    2. Jay Laurent

      A lot of name dropping in this sentence. It's hard to tell why he's doing this. Maybe he wants to point out how many influencers there are and how unrecognizable their names are? Or maybe these names being thrown around are just supposed to sound like gossip?

    3. If it ends, it ends,

      This apathy is an interesting conflict between these kids' identities as working professionals on these apps and as people having grown up in a constant state of emergency.

    4. It is raining ash while we play basketball

      It seems like the author keeps pulling in references to the apocalyptic scenes of 2020 and 2021, comparing this new form of media to the apocalypse to show his disdain for it.

    5. But I confess I’m delighted by the attention and can’t help feeling the palliative glow of immediate recognition.

      Even the critic loves the attention! This really speaks to how addictive this type of media can be.

    6. poorly these guys were prepared to interact with a journalist.

      It feels like in many ways digital media influencers act as their own journalists, showing off intimate details of their lives. And maybe it's the lack of preparation that makes them so appealing and relatable to young people?

    7. Jay Gatsby out of every intrepid Jimmy Gatz

      Swanson's use of literary and historical references (see "Nero and his fiddle" above) is a clever way of juxtaposing today's media with the media of the past. Anyone who has read The Great Gatsby has dreamt about what they would do if they could live that lavishly, and Swanson is pulling on that feeling to exaggerate TikTok-ers' fame and point out the insanity of it.

    8. What is happening?’ And I finally just told her, ‘I’ve been livestreaming on this app.’

      This sort of justification never had to exist before social media platforms like TikTok. In this case, the mother does not know how her son is spending his time, why people are interested in him for it, or even what the app is called. This is a good example of how new digital media (or at least a significant subset of it) and the influence associated with it is largely used by children.

    9. I’m sorry

      Reflects a personal, casual tone, indicative of the trend of technology becoming more and more personalized and of users sharing more and more of the intimate details of their lives.

    1. Brings up interesting ideas about how confident we need to be in our tech before we start using it. Maybe more rigorous testing of models before they are deployed (like in the medical risk score assessment) would help to limit the harm done.

    2. The black box idea of AI is dangerous. Yes, people don't always know exactly what features an AI uses and how it weights them to make predictions, but they do know exactly what data is being input and can test how the exclusion or inclusion of different types of data affects performance. The "two Muslims" example from the GPT-3 writing AI likely occurs because the model pulls from a vast array of human writing, meaning the problem is how much of that human writing associates Muslims with terrorism.

    3. I think this format of journalism is an interesting choice for tackling this subject. It is engaging and catchy, which likely helps it reach more people/a more general audience, but it sacrifices some academic or scientific rigor. Specifically, I am thinking about how much freedom they had to include or exclude parts of their research to highlight whatever points they wanted. Similar to how AI is excluding black and brown faces/voices.

    4. They are using AI to test AI in a YouTube video that is shared and recommended by AI! This really exaggerates how dependent we are on these technologies as well as how reliable or unreliable our answers may be based on the issues with the tech.

    5. Good distinction between intent and impact of algorithms. Very similar to ideas about how we view microaggressions. Maybe this is a new type of digital microaggression?

  2. www.laurenrbeck.com
    1. create a digital caste system

      This really highlights how much more intertwined we are with our technology than we think. Even with the most sophisticated tech, we cannot stray from our worst forms of social order.

    2. created

      This is a good point about subjectivity vs. objectivity. Even though many of these algorithms are designed to "objectively" identify or represent suspects, we subjectively decide whether that fits into our ideas of justice. It also brings up the questions: how confident can we ever be in our algorithms? How much trust are we comfortable putting in them?

    3. visual dimensions of scientific racism

      I'm unclear on the exact definition of this term, but it reminds me of racism in medical figures. Many diseases and syndromes in biology and medical textbooks only show examples of how the symptoms appear in white people. Especially for skin disorders, this makes diagnosis in non-white people very difficult!

    4. “A bit of science fiction at this point.

      Good to have this perspective! Even though strong correlations can be made between combinations of genetic markers and physical traits, they are very hard to extrapolate to predicting individual cases.

    5. “Latino man, with likely olive skin, brown or hazel eyes and black hair

      Another example of human bias being input into supposedly "objective" programs. No person can accurately describe someone's appearance without bias!

    6. humans can be designed better than they currently are

      This all sounds very similar to the "gene therapy" that is becoming more realistic. Eligible (AKA wealthy) participants can sequence several of their embryos and decide which one to keep. But the science says these predictions are totally unreliable on an individual basis!

    7. harness the genomic revolution

      Similar statements about "harnessing" the power of some brand new technology have been made to justify horrible acts of violence. In my mind, it is much better to wait until more is known about the subject.

    8. machine bias

      Many biases of programs come from the "black box" effect of machine learning algorithms. People claim they cannot be racist because they do not explicitly use race, but most developers are white and use their own biased perspectives to determine what these algorithms do use to make decisions.

    9. they’re a choice

      This connects well to the conversation about racial bias in computer programs. People are quick to hide their biases behind the veil of "objectivity" in technical methods. In response to these biases, many have said that "computer programs are just bias immortalized in code."

    10. innovation produces social containment

      This brings up a lot of interesting ideas about what innovation really is. Is it really innovative if it is within the confines of what is socially accepted/comfortable? Are people open to it if it is outside of that comfort zone?

    11. they’d like to remind everyone

      This sort of statement is clearly operating to take attention away from the source of the problem. Recognizing people as people is not something that needs to be celebrated; it is the bare minimum. But pointing it out as a success draws attention away from the program's failure to recognize black people.