36 Matching Annotations
  1. Dec 2023
    1. How have your views on automation and programming changed (or been reinforced)?

      I think I inherently thought of bots, automation, and AI as harmful, but failed to realize that technology like that is used in basically everything we do online. Seemingly harmful aspects of automation like spam bots can also be used for justifiable purposes, like the instances of protesting that we looked at, which goes to show that most stories aren't one-sided. I've learned to better look at both the pros and cons of things like bots and automation.

    2. How have your views on ethics changed (or been reinforced)?

      My perspective on ethics is a lot more nuanced now, with a greater emphasis on understanding multiple ethics frameworks. If I've taken away anything from the online ethical dilemmas we've covered in class, it's that there's rarely a right or simple answer to the issue---when something is suggested which helps the situation, it usually brings a set of new problems to be considered. That's why it's important to research the topic and understand ethics in a more holistic way.

  2. Nov 2023
    1. Meta now has a mission statement of “give people the power to build community and bring the world closer together.” But is this any better?

      I feel like this effectively says the same thing, although with a more hands-off tone. There doesn't seem to be much difference between Zuckerberg's goal of "connecting" and bringing the world "closer together," and the ethics of this whole thing seem to rely pretty heavily on intent vs. actual outcome.

    1. In what ways do you see capitalism, socialism, and other funding models show up in the country you are from or are living in?

      Obviously I think of big corporations like Apple, Google, Microsoft, etc. In the entertainment scene I think of Disney, which in recent years has seemed to sacrifice quality for efficient profit---cranking up prices for parks, low employee wages, market oversaturation. Not to mention their acquisition of other large corporations like Fox. In terms of their largest source of revenue, entertainment and media, they seem to have also cut back on original content in favor of sequels and live-action remakes of existing properties. Disney and other corporations also capitalize on things like Pride Month to sell more merchandise, which to me seems pretty disingenuous.

    1. Pick a situation where someone is being publicly shamed. Who is responsible for accepting or rejecting their apology/repentance?

      The theme of public shaming comes up a lot on sites like YouTube, which is why the term "YouTube apology" is so infamous, often representing a disingenuous apology which seeks to self-victimize or outright deny claims against the person. The biggest and most recent example of this that I've seen is through a YouTuber named Sssniperwolf, who publicly doxed another YouTuber, Jacksfilms, after he posted a series of videos criticizing her instances of content theft. Although audiences on both sides are heavily involved in the public perception of this conflict, I think it lies solely with Jacksfilms to accept/reject her apology, as his personal privacy was invaded. Unfortunately, Sssniperwolf issued an apology which was clearly hastily put together and came after a series of posts in which she doubled down on the validity of her actions.

    1. What do you consider to be the most important factors in making an instance of public shaming bad?

      I think public shaming is at its worst when it becomes a violation of privacy. It's one thing to express abusive behavior or stigmatization online, but something like doxing poses a real threat to someone's personal life and physical safety.

    1. How do you think social media platforms should handle crowd harassment? Are there things they should do to reduce it? Should they consider whether harassment is justified in some instances?

      I think a lot of it lies in the boundaries that the site instills in its platform. If they make their community guidelines clear, detailed, and comprehensive with appropriate consequences, people should be able to report instances of harassment, and from there it's just a matter of the site communicating well with its users.

    1. When do you think crowd harassment is justified (or do you think it is never justified)? Do you feel differently about crowd harassment if the target is rich, famous, or powerful (e.g., a politician)? Do you feel differently about crowd harassment depending on what the target has been doing or saying?

      In certain situations, I think crowd harassment can definitely be justified, like in scenarios where marginalized communities are being harassed themselves. As for the rich, famous, or powerful, my feelings surrounding crowd harassment are definitely looser; I inherently find myself less sympathetic because I can't relate to their higher status.

    1. When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing.

      This is interesting, especially since I just wrote a comment under the impression that the contributor base for sites like Wikipedia was much larger and more evenly dispersed. It's hard to say if this is inherently detrimental for something like credibility, as on one hand, there's less range of perspective and knowledge, but on the other, the content is more focused and perhaps less hindered by a jumble of external information.

    1. Wikipedia: Is an online encyclopedia whose content is crowdsourced. Anyone can contribute, just go to an unlocked Wikipedia page and press the edit button. Institutions don’t get special permissions (e.g., it was a scandal when US congressional staff edited Wikipedia pages), and the expectation that editors do not have outside institutional support is intended to encourage more people to contribute.

      Crowdsourced information sites have always been interesting to me, because while incredibly detailed, they aren't verified to be completely credible sources. They also have the unique trait, however, of crystallizing information from a diverse range of perspectives, knowledge, and lived experience, which I think is beneficial in its own way.

    1. What support should content moderators have from social media companies and from governments? Do you think there are ways to moderate well that involve less traumatizing of moderators or taking advantage of poor people?

      Higher pay and easily available resources for support would be a clear start. I also think that posts could be filtered before reaching content moderators, perhaps by using AI to detect evident violations and harmful content.

    1. What would be considered bad actions that need to be moderated? What would be the goals of doing content moderation? How might this look different than current content moderation systems?

      Hierarchy within social media platforms opposes relational ethics, specifically American Indigenous Ethics. The same framework also rejects propositional claims not founded in lived experience, so that could act as another means for moderation.

    1. Are there ways social media sites can be designed to be better for the mental health of its users?

      For me, one of the largest detriments of social media is just how much time it consumes, especially with short-form content and doom scrolling. A few times, I've gotten videos through scrolling telling me to "put the phone down" or take a break from the app, but they're usually just from other users and not from the site itself. For me, this usually works and kind of snaps me out of the scrolling trance, so I think it could be beneficial if more social media sites implemented these sorts of notifications.

    1. One of the ways social media can be beneficial to mental health is in finding community

      This is probably the largest benefit of social media for me, especially during quarantine. For example, having a Discord server with a group of friends to talk and play games online. Private spaces like these were good at ensuring online safety and even introduced new connections with people through inviting friends of friends.

    1. “Content going viral is overwhelming, intimidating, exciting, and downright scary..”

      I've seen a lot of stories about people's reflections on going viral, especially subjects of popular memes. Some feel similarly to Roxane Gay, where the attention from virality is emotionally overwhelming. I think this is particularly harmful for children who go viral or gain a lot of traction on social media. Conversely, many people take advantage of the attention. For example, the subject of the "Scumbag Steve" meme tried to capitalize on his popularity by building a social media presence around his accidental virality.

    1. When should sources be cited, and how should they be cited?

      I think one of the biggest factors is how transformative the new content is, and how personalized the original work is to the creator. For example, if someone stitched together a new meme format (say, the Drake hotline bling meme), then I don't think there's any incentive to credit whoever posted that original meme as long as the content is changed. Some more harmful examples I've seen are when people add little insight to existing content, or just flat out steal memes and art altogether; many reaction channels on YouTube and TikTok are notorious for this kind of content.

  3. Oct 2023
    1. What responsibilities do you think social media platforms should have in regards to larger social trends?

      I think platforms themselves should hold responsibility for their algorithms, which, by extension, directly influence conversations concerning larger social trends. Online echo chambers certainly amplify polarization in a lot of communities, and in this sense, I think the impact of recommendations outweighs the intent. Maybe sites can become more proficient in distinguishing different types of posts, so that users can receive things like art or memes based on their activity, but see a more varied range of opinion-based content.

    1. What experiences do you have of social media sites making particularly bad recommendations for you?

      My Instagram feed seems really inconsistent with what it recommends me. Sometimes it recommends topics that I've shown interest in (even outside of the site), but sometimes I'll click on a post once (for example, a random meme) and it'll randomly fill my entire feed with posts in that same meme format.

    1. But in reality, it is much more complicated. Some new technologies make improvements for some people with some disabilities, but other new technologies are continually being made in ways that are not accessible.

      My sister is deaf, so accessibility for her has always been really important. I think many sites have made vast improvements over the past few years with captioning and alt-text, especially sites like TikTok and other short-form content. Auto-generated captioning is still an issue a lot of the time, though, on things like YouTube and sometimes even streaming services.

    1. What assumptions do the site and your device make about individuals or groups using social media, which might not be true or might cause problems? List as many as you can think of (bullet points encouraged).

      One thing I noticed about Instagram's accessibility settings is that they don't seem to have many options for manipulating visuals, like scaling color contrast to make photos more visible to people. Another thing I've seen is the use of emojis and ASCII symbols, which don't translate well from text to speech.

    1. Non-User Information: Social Media sites might collect information about people who don’t have accounts, like how Facebook does

      Looked into this a bit because I was curious about how they collect data. Facebook can collect non-user information through installing cookies on browsers if they visit sites with Facebook "like" and "share" buttons.

    1. What incentives do social media companies have to violate privacy?

      Similarly to the previous chapter about data mining, I think social media companies have an incentive to violate privacy as a means to make money or gain traction. For example, looking at things shared over DMs, and then recommending ads to a user based on those messages.

    1. Do you think there is information that could be discovered through data mining that social media companies should seek out (e.g., they can’t make their platform treat people fairly without knowing this)?

      I think data mining can be useful through its automation. Analyzing large sets of data might help identify things like bots and fraud. Although, I think it's also important for big sites to maintain extreme transparency with their users, and not apply data mining with the sole purpose of generating more money.

    1. How comfortable are you with Google knowing (whether correctly or not) those things about you?

      Definitely strange and a little unnerving to see how social media platforms track information across different services. I looked at my Twitter interests page, and saw specific topics that I've never interacted with on Twitter, but I have on other sites. Also interesting to see the real scope of what Twitter considers when recommending me stuff, as it was a really thorough list, and pretty accurate to what I consume.

    1. Humans are brilliant at finding patterns, and we use pattern recognition to increase the efficiency of our cognitive processing. We also respond to patterns and absorb patterns of speech production and style of dress from the people around us.

      Similar to the idea of code switching discussed in the previous chapter. This aptitude for recognizing patterns is also why it's easy for a troll to disrupt the stability of those practices. Interesting to see how this notion of "trolling" can be used to disrupt structures that may be seen in a negative light, such as the K-Pop protest example. Despite its inherent connotations, many things on social media such as trolls and bots come down to perspective, which is why considering multiple ethical frameworks is beneficial.

    1. Have you witnessed different responses to trolling? What happened in those cases?

      I see trolling most prominently through video games, often in real-time whether it be through chat functions or over voice chat. In these cases, trolls usually antagonize people for amusement (possibly by intentionally doing poorly in a game) or to emphasize their superior skill in some way.

    1. Where do you see parasocial relationships on social media?

      I see parasocial relationships a lot through video-sharing sites like YouTube, TikTok, and especially Twitch, where content creators and their audience are actively engaging with each other in real-time. I see a lot of stories of fans harassing or making creators uncomfortable because they're disillusioned by their perception of their "relationship," but I've also seen instances of creators developing parasocial relationships with their audience, like in the recent controversy surrounding YouTuber Colleen Ballinger.

    1. How do you notice yourself changing how you express yourself in different situations, particularly on social media? Do you feel like those changes or expressions are authentic to who you are, do they compromise your authenticity in some way?

      For me, my public image on social media is very easily malleable, which isn't to say it's inauthentic. Sites like Instagram inherently allow people to control how their followers see and perceive them, like posting photos that paint them in a positive light. To me, this isn't inherently inauthentic as long as what's posted isn't a direct misrepresentation or contradiction of yourself. If I sort through a few photos of myself and only post the best one, that's still authentic, but if I photoshop that photo and advertise it as real, I feel like that borders on crossing the line.

    1. Fig. 5.7 When Kyle attempted to retweet this article, Twitter stopped him to ask if he wanted to read the article first.

      This was interesting to me as I initially viewed "high friction" with a negative connotation, since I associate it with ads or other unwanted elements that detract from the ease of using the platform. However, examples like this show how high friction, in many ways, can have ethically just outcomes. Another example I can think of is "sensitive content" warnings on platforms like Instagram, which blur out images and videos which may be controversial or harmful for some viewers, and thus give the user a preliminary option as to whether they want to view it or not.

    2. Look at the different ethics frameworks and see which ones might have something to say about those different ways of forming connections with others.

      I see a lot of possible parallels between social media and many relational ethics such as Ethics of Care, Ubuntu, or American Indigenous Ethics. The focus on humanity, lived experience, and investment in relationships (even with strangers) sounds a lot like the types of inherent connections we create on social media. Although, like we saw with the "Antisocial Media" page, those ethical frameworks might not always be so applicable in practice.

    1. This can be especially important when there is a strong social trend to overlook certain data. Such trends, which philosophers call ‘pernicious ignorance’, enable us to overlook inconvenient bits of data to make our utility calculus easier or more likely to turn out in favor of a preferred course of action.

      The notion of considering all data sets and perspectives reminds me of our discussion about different ethical frameworks and our exercise in considering each one when approaching an ethical dilemma. It's interesting that utilitarianism seems to be the most logical framework for approaching data, as opposed to something like Taoism, which might be less relevant here because it's not as quantitatively focused. Just an example of how different frameworks can be applied to different situations.

    1. The time the image/sound/video was created; the location where the image/sound/video was taken; the type of camera or recording device used to create the image/sound/video; etc.

      This explanation/example of metadata helped a lot with my understanding. I've always heard the term thrown around, especially in things like video games where people might go into the game's metadata to extract certain information and secrets that the developers stored within it. I have a better grasp now of what that actually means in the context of data as a more holistic concept.

    1. with only 21.9% of tweets analyzed about the movie being negative in the first place.

      This was actually surprising for me because around the time that this movie came out, I had a perception that more closely mirrored the false narrative that the politically motivated humans or bots were pushing. I thought that the negative backlash towards this film far outweighed the positive or neutral feedback on Twitter specifically, so it's interesting to see that the situation was far less one-sided. It goes to show how much bots can influence public perception of and engagement with a certain topic.

    1. “Most useful Instagram bots”

      I found an Instagram growth tool called "Kicksta" which, given a target username, will like a couple of photos of each of that username's followers on your account's behalf. Essentially, it's a way to use automation to increase followers by targeting users who seem interested in your content. I'm curious about the conversations about ethics and privacy surrounding this, as well as social media algorithms in general.

    1. What motivated Twitter users to put time and energy into this?

      When a social media story like this reaches this point of publicity, I feel like it's in the nature of many people to contribute and inflate the situation to a point of arguable absurdity (taking photos of her, finding flight numbers, the hashtags), almost because it's funny, or at least gratifying to be "in on the joke" and gain the attention and solidarity of many other people communally agreeing that someone is morally wrong. Delivering justice in situations like this also makes people feel morally upright in comparison, encouraging more people to double down on the punishments; I see this a lot on social media through things like cancel culture.

    1. Jean-Paul Sartre, 1900s France

      I find existentialism and Jean-Paul Sartre's interpretation of it really interesting. Similarly to what's summarized here, Sartre poses that "existentialism is a humanism," which dictates that without a "God" there is no set framework for the ideal, morally perfect human; thus, we must create and define purpose ourselves. In one of his most famous works, Huis Clos, Sartre emphasizes that without a higher power, concepts such as hell are constituted by those we surround ourselves with and those who impose their own moral judgement. It's a notion that (like nihilism) comes off as inherently pessimistic, but also one that I find uplifting in some ways, particularly in the idea that we are solely responsible for creating value and meaning in the lives of ourselves and others.