19 Matching Annotations
  1. Last 7 days
    1. You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it, watch people watch them, watch people watch them watch them.

      This reminds me of John Berger's famous line in Ways of Seeing that "men look at women" while "women watch themselves being looked at." It also reminds me of Raymond Williams's assertion that culture and society are experienced in the habitual past tense. I wonder if the experience of viewing the self as separate from one's body, as a representation, has ramifications for the construction of identity.

    2. People historically came to cosmetic surgeons with photos of celebrities whose features they hoped to emulate. Now, they’re coming with edited selfies. They want to bring to life the version of themselves that they curate through apps like FaceTune and Snapchat.

      This section reminds me of Baudrillard's description of simulacra and simulation, which gestures at the ways symbols have overtaken reality in the modern age. Though Baudrillard wrote in the late 20th century, before the advent of the internet as we know it, I feel like the salience the internet has in our daily lives works off this concept. People imagine their appearance through the selfie cameras of our phones, their personalities as the conglomeration of the interests they express on YouTube and TikTok. I've had my own experience with this, where simulated reality (in my case, a Google Doc rather than writing on a piece of paper) becomes the frame of reference through which I imagine the world. I wonder if there are further issues with simulacra as it comes to recommendation algorithms.

  2. Feb 2026
    1. For example, in the case of the simple “show latest posts” algorithm, the best way to get your content seen is to constantly post and repost your content (though if you annoy users too much, it might backfire).

      In my research, I've encountered a phenomenon on TikTok where creators make dozens of versions of the same video and upload them one after another. YouTube channels tend not to work this way: creators typically spend a lot of time on a longer video and release videos with large gaps in between. The different recommendation algorithms incentivize creators to post differently. It makes me wonder if there is an ethical responsibility on the part of the platform to favor one mode or the other, in the interest of preventing creator overwork.
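      The "show latest posts" ranking the quoted passage describes, and the reposting incentive it creates, can be sketched in a few lines (a minimal illustration; the Post fields and example data here are hypothetical, not from any real platform):

      ```python
      from dataclasses import dataclass

      @dataclass
      class Post:
          author: str
          text: str
          timestamp: float  # seconds since epoch

      def latest_posts_feed(posts, limit=10):
          """A 'show latest posts' algorithm: newest first, no other ranking."""
          return sorted(posts, key=lambda p: p.timestamp, reverse=True)[:limit]

      # Under this ranking, reposting the same content with a newer timestamp
      # pushes it back to the top of every follower's feed.
      posts = [
          Post("a", "original", timestamp=100.0),
          Post("b", "other", timestamp=150.0),
          Post("a", "original (repost)", timestamp=200.0),
      ]
      feed = latest_posts_feed(posts)
      print([p.text for p in feed])  # the repost appears first
      ```

      Because nothing but recency is scored, flooding the feed with reposts is the dominant strategy, which is exactly the incentive the passage points out.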

    1. Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibility that aren’t necessarily connected to what any individual inside intends.

      I feel like recommendation algorithms are a really interesting aspect of this, in that they are all at once systems to which applying systemic analysis makes sense, and also the result of individual behaviors and actions in a way beyond regular society. The collection of surplus behavioral data complicates the distinction. Recommendation algorithms use our addictive tendencies against us, and it makes me wonder if we should legislate them more as gambling than as information systems.

    1. The following tweet has a video of a soap dispenser that apparently was only designed to work for people with light-colored skin

      This video (which I've seen before) makes me consider how hierarchies get baked not just into the code of the digital world, but into its very infrastructure. It's been well documented that because cameras were designed by people with lighter skin, Black people (especially in the age of film rather than digital photography) struggled to find cameras and film stock that captured their skin tones. I wonder if something similar happens in the digital environment, where a lack of coders and designers from the Global South leads to assumptions in digital infrastructure that go unnoticed and unchallenged.

  3. social-media-ethics-automation.github.io
    1. Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined [j1]. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another.

      This idea of situational disabilities is really interesting, and not one I'd previously considered (even though in hindsight it seems obvious). It makes me think about how our ideas of disability have been shaped since the advent of technology. Because the digital medium inherently privileges sight over the other senses, I feel like the internet's widespread adoption has stigmatized blindness more than before, while a disability like hearing loss might be less salient for a populace already used to noise cancellation in public. I wonder what other disabilities have shifted in stigma as a result of the rise of social media.

    1. What incentives do social media companies have to protect privacy?

      I think the only incentive social media companies have to protect privacy is that users desire a service which provides it. I feel like this incentive is also weak, as privacy as a resource has been renegotiated over the past thirty years of the internet. Things which previously would have been unthinkable, like emailing documents containing SSNs, are now common (I've had to do so to sign leases multiple times in college). I think it's only if the underlying logic of these companies (that being profit and the monopolization of digital existence) changes that an incentive to protect privacy could strengthen.

  4. social-media-ethics-automation.github.io
    1. right to be forgotten

      In the capstone research paper I'm writing this quarter, I place a lot of importance on the reification of thoughts and desires that the collection of behavioral data brings. I feel like there is a huge discomfort for the user of the internet at the prospect of encountering a digital world shaped by their thoughts in a way they perceive themselves to have little to no control over. The right to be forgotten resonates with that anxiety and discomfort, and I feel like the US model should adopt something similar. It assumes that anonymity is an inalienable right, which is something I also hold to be true.

  5. social-media-ethics-automation.github.io
    1. Additionally, spam and output from Large Language Models like ChatGPT can flood information spaces (e.g., email, Wikipedia) with nonsense, useless, or false content, making them hard to use or useless.

      This makes me think about how, with ChatGPT so widespread, some of the data it trains on is content it wrote itself. I wonder if this will become an issue in the future, especially in data science and any research having to do with the digital sphere. Will future research papers about AI have to account for a certain ouroboros character in the AI, in that it is consuming itself and tainting its own processes? Are there broader ramifications for the internet writ large?

    1. One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later.

      I feel like this process gives a specific feeling to using these platforms, which mine user data more aggressively than the rest of the web already does. Especially on platforms like TikTok, where the algorithm incorporates so much data in service of retention time, I feel like even if we are not constantly thinking about it, having our biology used in service of retention time leaves us with a feeling of alienation. I wonder if there is any research on this relationship.

  6. Jan 2026
    1. Many websites now let you create a profile, form connections, and participate in discussions with other members of the site.

      I think the ramifications of allowing the construction of digital identity almost completely separately from corporeal identity really do feel like one of the hallmarks of our current digital milieu. It makes me think about whether an alternate digital formation could have arisen where digital identity was tied to something like a state ID or driver's license, and the freedom (and perceived lack of consequences) of our current internet was different. Would online behavior have shifted? Would trolling or dogpiling have been less common? Would the anxiety and malaise tied to existing on social media have lessened, or increased?

    1. Books and news write-ups had to be copied by hand, so that only the most desired books went “viral” and spread

      I think this is an interesting take on virality in the age before mass information. I feel like it is important context for how ideas spread to acknowledge how institutional power interacted with virality. In the age when books and news were copied by hand, and the scribes who could do that work served the nobility or the church, all written Western culture was mediated by the structures of power. In the current day, does this imply platforms like TikTok and YouTube occupy a cultural niche and influence similar to the church and nobility? Are recommendation algorithms the gatekeepers of virality in the same way?

    1. It’s also about which groups get to be part of the design process itself.

      This is an impetus in my own research! My work deals with the ways in which algorithms shape identity construction and the role they serve in mediating culture. This importance means that the designers of the algorithm have undue influence over the flow of culture and subcultures that exist on these platforms, despite not participating in those subcultures or knowing their nuances and needs. For example, a white coder in charge of some textual aspect of the algorithm might not include autocomplete options which reflect the language nuances of Black or brown users. This aspect of coding is, in my opinion, one of the most important concepts related to digital existence today.

    1. Relationship status

      I think this is an interesting data type, with constraints that shift along cultural axes. While the other data types like age, name, and address can vary from culture to culture (such as in the characters used to input them), the answers will all be relatively similar. People might, for example, measure their age using different calendars, or by the number of winters experienced, but a full annual cycle is used pretty much worldwide. For relationship status, though, there are many variables even within one culture that must be negotiated. Does dating count as different from a relationship? What about new terms like situationship? What about cultures with multiple partners? There are so many value judgments about what counts as a relationship in the act of reifying it in code. It makes me think about the ways our cultural ideology is represented in code, more than we often realize.
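      The value judgments described above become concrete the moment the field is typed. A minimal sketch (the option list here is hypothetical, loosely modeled on common social media dropdowns, not any real platform's schema):

      ```python
      from enum import Enum

      class RelationshipStatus(Enum):
          """Every option here is a cultural judgment about what 'counts':
          'dating' is split from 'in a relationship', while polyamory and
          situationships get no value at all and must round to something else."""
          SINGLE = "single"
          DATING = "dating"
          IN_A_RELATIONSHIP = "in a relationship"
          MARRIED = "married"
          ITS_COMPLICATED = "it's complicated"

      def parse_status(raw: str) -> RelationshipStatus:
          """Any self-description outside the enum is forced into the schema."""
          try:
              return RelationshipStatus(raw.lower())
          except ValueError:
              return RelationshipStatus.ITS_COMPLICATED

      print(parse_status("Married").value)        # fits the schema: "married"
      print(parse_status("situationship").value)  # flattened to "it's complicated"
      ```

      The fallback branch is where the reification happens: anything the designers didn't anticipate is silently collapsed into a category they chose.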

    1. on International Women’s Day, the bot automatically finds when any of those companies make an official tweet celebrating International Women’s Day and it quote tweets it with the pay gap at that company:

      This is a fascinating idea. I think it is a distillation of the ideal behind the fact-checking feature that Meta tried and then abandoned, or the community notes feature currently on Twitter. The ability to add context in real time to an issue which is politically multi-faceted (such as corporations paying lip service to International Women's Day while not eradicating the structures of misogyny and patriarchy in their own companies) is something social media ought to be doing, but doesn't. I wonder what factors have kept more social media companies from attempting to add or push these features on their platforms. Is it profit? Is it their own political beliefs? Or is it simply not feasible?

    1. “human computers”

      I think this concept of a human computer is thought-provoking in a myriad of ways. My own research pertains to how identity is being reshaped along the boundaries of recommendation algorithms, and I feel like this idea of a human computer has a lot of thematic overlap. I know that the first "computers" were people who manually did computation as their job, but to what extent did their status as essentially beings of code shape their lives? Did they (or do they, in the case of modern human computers) see the world differently than I do? What do they notice? Does living so in tune with the digital shape their philosophical framework? I feel like the fact that so much of culture and my own time is mediated by retention-based algorithms drastically shapes my ability to imagine a world different from my own. Does the same hold true for human computers?

    1. They are shocked at being asked. Which means nobody is asking those questions.

      I find it striking that tech workers do not consider ethics in their development, since their act of digital creation is, to me, an inherently philosophical endeavor. I wonder how much of this lack of ethical consideration has to do with the profit motive. If the internet had been developed exclusively by researchers or government workers required to follow certain ethical frameworks, would the result have been more conscious of the ethics of its creation? How much does the ethos of "move fast and break things" simply abdicate responsibility for these society-shifting inventions?

    1. “existence precedes essence.” That is, things exist first without meaning or value

      This makes me think about how much the older frameworks operated on this assumption. I find it hard to believe that the first time any philosophy assumed an inherent lack of meaning to the world was the 19th century. How many of these frameworks like Confucianism, Taoism, Stoicism, etc also assumed a lack of meaning to the world, and then built their meanings off of that assumption, in order to maintain social cohesion?