17 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Wikipedia:Paid-contribution disclosure. November 2023. Page Version ID: 1184161032. URL: https://en.wikipedia.org/w/index.php?title=Wikipedia:Paid-contribution_disclosure&oldid=1184161032 (visited on 2023-12-08).

      I never knew people could get paid to make changes to Wikipedia, but it makes sense. Wikipedia is one of the largest free information databases in the world, so its data gets pulled into many other places. Paying editors to keep your pages up to date and paint you favorably lets you influence everyone who sees that data without knowing they're seeing it.

    1. Governments might also have rules about content moderation and censorship, such as laws in the US against CSAM. China additionally censors various news stories in their country, like stories about protests. In addition to banning news on their platforms, in late 2022 China took advantage of Elon Musk having fired almost all Twitter content moderators to hide news of protests by flooding Twitter with spam and porn [n10].

      It is interesting that China is able to run such a robust censorship network, blocking news stories and a wide range of sites, while the US seems unable to block much of what we censor, such as CSAM. I wonder what sets China's censorship apparatus apart from the US's.

  3. May 2026
  4. social-media-ethics-automation.github.io
    1. Zack Whittaker. Facebook won't let you opt out of its phone number 'look up' setting. TechCrunch, March 2019. URL: https://techcrunch.com/2019/03/03/facebook-phone-number-look-up/ (visited on 2023-12-07).

      It’s weird enough that we as consumers/users aren’t able to control what the company does with our data, but not even being able to control how other users search for your data is a very scary reality. Facebook’s practice of building out networks that go beyond people you have met in real life, recommending people you might like, far exceeds what I feel any of these social media platforms should have been able to do.

    1. There are concerns that echo chambers increase polarization, where groups lose common ground and the ability to communicate with each other. In some ways echo chambers are the opposite of context collapse, where contexts are created and prevented from collapsing. Though others have argued [k16] that people do interact across these echo chambers, the contentious nature of their interactions increases polarization.

      I would agree that the culture created online has fostered echo chambers rather than a culture of talking with people who hold opposing views. I wonder how much of this status quo comes from social media sites emphasizing the formation of group subcultures, such as Reddit with subreddits or Discord with channels.

  5. Apr 2026
  6. social-media-ethics-automation.github.io
    1. Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).

      This is really interesting to me because my parents don’t personally use Facebook, but a restaurant they like announces its specials only on Facebook, so they are forced to use it. It is especially weird to think that Facebook has built a marketing profile on them just because they look at what a restaurant is offering that day.

  7. social-media-ethics-automation.github.io
    1. Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended.

      It’s kinda scary to think that potentially influential studies are either being stunted by responses that skew the good data that would’ve been collected, or that the data is getting published with these faulty results. This is especially scary when you consider research that affects policy decisions, and how those decisions are now shaped by this poisoned data.
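
      As a rough illustration of how a researcher might notice this kind of poisoning, here is a minimal sketch (not from the textbook) that checks a survey dataset for the demographic skew described above. The file name and column name ("survey_responses.csv", "age_group") are hypothetical placeholders, as is the 60% threshold.

      ```python
      # Sketch: flag a survey dataset that may have been brigaded by one
      # narrow demographic. File and column names are hypothetical.
      import pandas as pd

      responses = pd.read_csv("survey_responses.csv")  # hypothetical file

      # Share of responses coming from each demographic group
      shares = responses["age_group"].value_counts(normalize=True)
      print(shares)

      # Warn if any single group dominates (threshold is an assumption)
      if shares.max() > 0.60:
          print(f"Warning: '{shares.idxmax()}' makes up {shares.max():.0%} "
                "of responses; the sample may be poisoned by brigading.")
      ```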

  8. social-media-ethics-automation.github.io
    1. Mia Jankowicz. A TikToker said he wrote code to flood Kellogg with bogus job applications after the company announced it would permanently replace striking workers. Business Insider, December 2021. URL: https://www.businessinsider.com/tiktoker-wrote-code-spam-kellogg-strike-busting-job-ad-site-2021-12 (visited on 2023-12-05).

      I love this type of rebellion because I feel it often creates more change, and it gives the company less ability to put the blame on the protestors and use that as a reason to suppress them or charge them with trespassing.

    1. There is a reason why stereotypes are so tenacious: they work… sort of. Humans are brilliant at finding patterns, and we use pattern recognition to increase the efficiency of our cognitive processing. We also respond to patterns and absorb patterns of speech production and style of dress from the people around us. We do have a tendency to display elements of our history and identity, even if we have never thought about it before. This creates an issue, however, when the stereotype is not apt in some way. This might be because we diverge in some way from the categories that mark us, so the stereotype is inaccurate. Or this might be because the stereotype also encodes value judgments that are unwarranted, and which lead to problems with implicit bias. Some people do not need to think loads about how they present in order to come across to people in ways that are accurate and supportive of who they really are. Some people think very carefully about how they curate a set of signals that enable them to accurately let people know who they are or to conceal who they are from people outside their squad.

      It is interesting to think about how stereotyping has changed as life has moved online. You would have thought that classical stereotypes would be mitigated, because you aren’t seeing the person on the other side of the screen.

  9. social-media-ethics-automation.github.io
    1. David Robinson. Text analysis of Trump's tweets confirms he writes only the (angrier) Android half. August 2016. URL: http://varianceexplained.org/r/trump-tweets/ (visited on 2023-11-24).

      This is a really funny analysis, seeing who is actually writing his tweets and what percentage comes from staff versus from him. I also find it funny that he used an Android rather than an iPhone, since I would usually assume he and his staff would use the same types of devices.
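
      The original analysis was done in R on the Variance Explained blog; as a rough Python analogue, here is a sketch of comparing tweet tone by posting device. The CSV, its columns ("source", "text"), and the tiny anger lexicon are all hypothetical placeholders, far cruder than the sentiment lexicons the real analysis used.

      ```python
      # Sketch: compare a crude "anger score" for tweets grouped by the
      # device they were posted from. Data and lexicon are hypothetical.
      import pandas as pd

      ANGRY_WORDS = {"bad", "weak", "dumb", "crooked", "dishonest", "failing"}

      def anger_score(text: str) -> float:
          """Fraction of words in the tweet found in the crude anger lexicon."""
          words = text.lower().split()
          return sum(w.strip('.,!?"') in ANGRY_WORDS for w in words) / max(len(words), 1)

      tweets = pd.read_csv("trump_tweets.csv")  # hypothetical file
      tweets["anger"] = tweets["text"].apply(anger_score)

      # Average score for tweets sent from Android vs. iPhone
      print(tweets.groupby("source")["anger"].mean())
      ```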

    1. Since we have different personas and ways of behaving in different groups of people, what happens if different groups of people are observing you at the same time? For example, someone might not know how to behave if they were at a restaurant with their friends and they noticed that their parents were seated at the table next to them. This phenomenon is called “context collapse [f31].”

      It’s interesting to think that the people we see online daily, people who influence potentially millions of viewers, are acting in some way. I feel like that doesn’t translate well across media, because we assume a sense of authenticity online compared to on TV or other mediums.

  10. social-media-ethics-automation.github.io
    1. Julia Evans. Examples of floating point problems. January 2023. URL: https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/ (visited on 2023-11-24).

      The author explains what floating point numbers are and walks through their different use cases. Floating point numbers are numbers that contain decimals; unlike integers, they can represent fractional and very large or very small values, but only approximately, which leads to surprising precision errors. Floats are often used in more complex math operations.
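
      A few classic surprises of the kind the article covers can be reproduced in a couple of lines of Python; this is a minimal sketch of standard, well-known floating point behavior, not examples taken from the article itself.

      ```python
      # Decimal fractions like 0.1 have no exact binary representation.
      print(0.1 + 0.2)            # 0.30000000000000004, not 0.3
      print(0.1 + 0.2 == 0.3)     # False

      # Above 2**53, not every integer is representable as a float.
      big = 2.0 ** 53
      print(big + 1 == big)       # True -- the +1 is silently lost

      # The usual fix for comparisons is a tolerance check:
      import math
      print(math.isclose(0.1 + 0.2, 0.3))  # True
      ```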

    1. “Design justice is a framework for analysis of how design distributes benefits and burdens between various groups of people. Design justice focuses explicitly on the ways that design reproduces and/or challenges the matrix of domination (white supremacy, heteropatriarchy, capitalism, ableism, settler colonialism, and other forms of structural inequality).” It’s also about which groups get to be part of the design process itself.

      Design justice is an interesting concept. I feel we usually forget how the users, those having their data collected, are impacted by the data collection. It also makes me wonder how harmful the data companies collect on us really is, and whether we have any expectation of data privacy when using companies’ platforms.

  11. social-media-ethics-automation.github.io
    1. Steven Tweedie. This disturbing image of a Chinese worker with close to 100 iPhones reveals how App Store rankings can be manipulated. February 2015. URL: https://www.businessinsider.com/photo-shows-how-fake-app-store-rankings-are-made-2015-2 (visited on 2024-03-07).

      It is interesting to see that an app store manipulation farm still runs partly on human actions. We have been able to automate so many complex tasks, but for whatever reason it is apparently more efficient to still use real people in a freezing room.

    1. Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them. Note that sometimes people use “bots” to mean inauthentically run accounts, such as those run by actual humans, but are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm:

      It’s interesting to see how similar bots are to regular users. It is especially scary when you consider that most platforms don’t have bot tags, or that even when they do, people with malicious intent deliberately don’t label their accounts as bots. I had never really considered these human-run accounts that post junk and spam to be bots, but I can see how people would call anything that spams them a bot account.
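
      To make concrete how little separates a bot post from a human one, here is a minimal sketch using the real Reddit API wrapper `praw`. All credentials and the subreddit name are placeholders; this is one way a bot can post through an ordinary account, not the textbook’s own example.

      ```python
      # Sketch: a bot posting through a normal account via the Reddit API.
      # Credentials and subreddit are placeholders.
      import praw

      reddit = praw.Reddit(
          client_id="YOUR_CLIENT_ID",          # placeholder
          client_secret="YOUR_CLIENT_SECRET",  # placeholder
          username="YOUR_BOT_ACCOUNT",         # placeholder
          password="YOUR_PASSWORD",            # placeholder
          user_agent="example-bot/0.1",
      )

      # To other users, this submission looks the same as one typed by a person.
      reddit.subreddit("test").submit(
          title="Posted by a bot",
          selftext="This post was made automatically through the API.",
      )
      ```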

    1. Natural Rights

      Natural Rights - All people are born with the rights to life, liberty, and property, and these rights should be protected.

      Intervene - An example of intervening would be protecting someone's natural rights, such as placing a parent or family member in a full-time health care home.

      Not Intervene - Not making that decision for your parent or family member, and instead honoring their right to make it for themselves, because it is their natural right to make decisions about their own life.

    1. Confucius, Analects 15.23 [b9] (~500 BCE China) “There is nothing dearer to man than himself; therefore, as it is the same thing that is dear to you and to others, hurt not others with what pains yourself.”

      I didn't know that so many philosophers had written about the golden rule in some form, each a little different but largely carrying the same intent. It's also interesting how someone's cultural background changes how they state the golden rule; having grown up in a school that very much emphasized Western values, I know the golden rule as it is said in Matthew 7:12, but I imagine people growing up with influences from different cultures have a different way of saying it.

    1. “We’re not making it for that reason but the way ppl choose to use it isn’t our fault. Safeguard will develop.” But tech is moving so fast. That there is no way humanity or laws can keep up. We don’t even know how to deal with open death threats online. Only “Can we do this?” Never “should we do this?” We’ve seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news.

      I find his comments on this to be very true. It often seems that very little consideration is given to the long-term effects on humanity as a whole of developing and releasing these technologies. A prime example is accessible AI image generation: when the company X made their AI image bot lack many of the standard safety guardrails of modern image generation algorithms, they released something that does real harm to people daily, without consideration for all it will harm.