- Last 7 days
-
social-media-ethics-automation.github.io
-
Mark Zuckerberg here has put himself in the position of a “White Savior” who has come to fix the problems of people all over the world by giving them the Internet. But we can question whether his plan is a good one. First: do users want the connection that Mark Zuckerberg is offering? The answer is at least in part yes, as people have signed up for the Internet through Zuckerberg’s program, and many are excited to access resources and be connected to the online world like everyone else. Second: is connecting everyone a good thing? The answer to this is not necessarily yes. The 1979 comedic sci-fi novel The Hitchhiker’s Guide to the Galaxy mocks the idea of the good of connecting everyone: [I]f you stick a Babel fish in your ear you can instantly understand anything said to you in any form of language. […] Meanwhile, the poor Babel fish, by effectively removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation.
The framing of Mark Zuckerberg as a “White Savior” is striking, and it highlights the consequences and challenges of tech expansion. While his plan may seem noble and kind, it can also be questioned, since he may be pursuing it mainly for monetary gain.
-
-
social-media-ethics-automation.github.io
-
Meta’s way of making profits fits in a category called Surveillance Capitalism. Surveillance capitalism began when internet companies started tracking user behavior data to make their sites more personally tailored to users. These companies realized that this data was something that they could profit from, so they began to collect more data than strictly necessary (“behavioral surplus”) and see what more they could predict about users. Companies could then sell this data about users directly, or (more commonly), they could keep their data hidden, but use it to sell targeted advertisements. So, for example, Meta might let an advertiser say they want an ad to only go to people likely to be pregnant. Or they might let advertisers make ads go only to “Jew Haters” (which is ethically very bad, and something Meta allowed).
Meta’s approach to generating profit through surveillance capitalism raises critical ethical concerns, as it monetizes behavioral data in ways that can create harmful biases and practices. This paragraph also highlights the relationship between data used for personalization and its potential for exploitation and societal harm.
-
- Nov 2024
-
social-media-ethics-automation.github.io
-
Jennifer Jacquet argues that shame can be morally good as a tool the weak can use against the strong: The real power of shame is it can scale. It can work against entire countries and can be used by the weak against the strong. Guilt, on the other hand, because it operates entirely within individual psychology, doesn’t scale. […] We still care about individual rights and protection. Transgressions that have a clear impact on broader society – like environmental pollution – and transgressions for which there is no obvious formal route to punishment are, for instance, more amenable to its use. It should be reserved for bad behaviour that affects most or all of us. […] A good rule of thumb is to go after groups, but I don’t exempt individuals, especially not if they are politically powerful or sizeably impact society. But we must ask ourselves about the way those individuals are shamed and whether the punishment is proportional. Jennifer Jacquet: ‘The power of shame is that it can be used by the weak against the strong’
Jennifer Jacquet suggests that shame can be a powerful moral tool, particularly when used by marginalized groups to hold the powerful accountable for actions that harm society. By focusing on collective wrongdoing, shame can serve as a form of public accountability where formal punishment is lacking.
-
-
social-media-ethics-automation.github.io
-
While public criticism and shaming have always been a part of human culture, the Internet and social media have created new ways of doing so. We’ve seen examples of this before with Justine Sacco and with crowd harassment (particularly dogpiling). For an example of public shaming, we can look at late-night TV host Jimmy Kimmel’s annual Halloween prank, where he has parents film their children as the parents tell the children that they ate all the kids’ Halloween candy. Parents post these videos online, where viewers are intended to laugh at the distress, despair, and sense of betrayal the children express. I will not link to these videos, which I find horrible, but instead link you to these articles: Jimmy Kimmel’s Halloween prank can scar children. Why are we laughing? (archived copy) Jimmy Kimmel’s Halloween Candy Prank: Harmful Parenting? We can also consider events in the #MeToo movement as at least in part public shaming of sexual harassers (but also of course solidarity and organizing of victims of sexual harassment, and pushes for larger political, organizational, and social changes).
The internet has given a platform to public criticism and shaming, making it easier for moments of distress to be shared widely and quickly. While some see humor in pranks like Jimmy Kimmel’s Halloween candy prank, these actions can also have lasting impacts on the individuals involved.
-
-
social-media-ethics-automation.github.io
-
While anyone is vulnerable to harassment online (and offline as well), some people and groups are much more prone to harassment, particularly marginalized and oppressed people in a society. Historically of course, different demographic groups have been subject to harassment or violence, such as women, LGBTA+ people, and Black people (e.g., the FBI trying to convince Martin Luther King Jr. to commit suicide). On social media this is true as well. For example, the last section mentioned that the (partially bot-driven) harassment campaign against Meghan Markle and Prince Harry was at least partially driven by Meghan Markle being Black (the same racism shown in the British press). When Amnesty International looked at online harassment, they found that: Women of colour (black, Asian, Latinx and mixed-race women) were 34% more likely to be mentioned in abusive or problematic tweets than white women. Black women were disproportionately targeted, being 84% more likely than white women to be mentioned in abusive or problematic tweets.
This reflection highlights how certain groups, like women of color, are more likely to face online harassment. It also points out that racism is often a major factor, as seen with Meghan Markle’s experience, showing how important it is to pay attention to these dynamics.
-
-
social-media-ethics-automation.github.io
-
17.2. Crowd Harassment

Harassment can also be done through crowds. Crowd harassment has always been a part of culture, such as riots, mob violence, revolts, revolutions, government persecution, etc. Social media then allows new ways for crowd harassment to occur. Crowd harassment includes all the forms of individual harassment we already mentioned (like bullying, stalking, etc.), but done by a group of people. Additionally, we can consider the following forms of crowd harassment: dogpiling (when a crowd of people targets or harasses the same person), public shaming (the subject of the next chapter), cross-platform raids (e.g., a 4chan group planning harassment on another platform), and stochastic terrorism, the use of mass public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random (see also: An atmosphere of violence: Stochastic terror in American politics). In addition, fake crowds (e.g., bots or people paid to post) can participate in crowd harassment.
This paragraph shows how crowd harassment has evolved with technology: while it has always existed in forms like riots or persecution, social media amplifies its reach and intensity. The mention of “fake crowds,” like bots or paid participants, reveals how manufactured support can fuel toxic behavior online.
-
-
social-media-ethics-automation.github.io
-
When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing. For example, Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors, and on StackOverflow “A 2013 study has found that 75% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.” We see the same phenomenon on Twitter: (Fig. 16.3: Summary of Twitter use, by Pew Research Center) This small percentage of people doing most of the work in some areas is not a new phenomenon. In many aspects of our lives, some tasks have been done by a small group of people with specialization or resources. Their work is then shared with others. This goes back many thousands of years with activities such as collecting obsidian and making jewelry, to more modern activities like writing books, building cars, reporting on news, and making movies.
I think crowdsourcing and social media platforms highlight a pattern where a small group produces the majority of content, while the majority of users participate minimally. This underscores the role of “power users” in driving content generation and shaping the platform overall.
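To make this split concrete, here is a minimal sketch (with entirely made-up contribution counts) of how one might measure what share of all contributions comes from the top 1% of users:

```python
# Hypothetical per-user contribution counts (e.g., edits, posts, answers).
contributions = [5000, 3200, 40, 12, 7, 3, 1, 1, 0, 0]  # one entry per user

total = sum(contributions)
ranked = sorted(contributions, reverse=True)
top_1_percent = ranked[: max(1, len(ranked) // 100)]  # at least one user

share = sum(top_1_percent) / total
print(f"Top 1% of users produced {share:.0%} of all contributions")
```

With the skewed counts above, a single power user accounts for well over half the total, echoing the Wikipedia and StackOverflow findings quoted in the excerpt.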
-
-
social-media-ethics-automation.github.io
-
Sometimes even well-intentioned efforts can do significant harm. For example, in the immediate aftermath of the 2013 Boston Marathon bombing, the FBI released a security photo of one of the bombers and asked for tips. A group of Reddit users decided to try to identify the bomber(s) themselves. They quickly settled on a missing man (Sunil Tripathi) as the culprit (it turned out he had died by suicide and was in no way related to the case), and flooded the Facebook page set up to search for Sunil Tripathi, causing his family unnecessary pain and difficulty. The person who set up the “Find Boston Bomber” Reddit board said “It Was a Disaster” but “Incredible”, and Reddit apologized for the online Boston ‘witch hunt’.
I believe this highlights the potential for well-meaning actions to cause harm when they are not carefully considered or managed. The Reddit users’ attempt to help identify the bomber led to significant distress for Sunil Tripathi’s family, showing how an unmanaged crowd effort can harm innocent people.
-
-
social-media-ethics-automation.github.io
-
Reddit is divided into subreddits, which are often about a specific topic. Each subreddit is moderated by volunteers who have special permissions, whom Reddit forbids from making any money: “Reddit is valued at more than ten billion dollars, yet it is extremely dependent on mods who work for absolutely nothing. Should they be paid, and does this lead to power-tripping mods?” (a post starting a discussion thread on Reddit about Reddit). In addition to the subreddit moderators, all Reddit users can upvote or downvote comments and posts. The Reddit recommendation algorithm promotes posts based on the upvotes and downvotes, and comments that get too many downvotes get automatically hidden. Finally, Reddit itself does some moderation as a platform in determining which subreddits can exist, and has on occasion shut some down.
This raises important questions about Reddit’s reliance on unpaid moderators despite its high valuation and whether this volunteer system may contribute to issues like moderator power dynamics and content bias. The structure allows both community-driven and algorithmic moderation, but Reddit’s direct intervention in banning subreddits also underscores the platform’s ultimate control over community content.
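As a rough illustration of the voting mechanics the excerpt describes, here is a hedged sketch; the scoring rule and the hide threshold are assumptions for illustration, not Reddit's actual (unpublished) algorithm:

```python
HIDE_THRESHOLD = -5  # assumed cutoff for illustration; Reddit's real logic differs

def score(upvotes: int, downvotes: int) -> int:
    # Simplest possible score: net votes.
    return upvotes - downvotes

def is_hidden(upvotes: int, downvotes: int) -> bool:
    # Comments that collect too many downvotes get automatically hidden.
    return score(upvotes, downvotes) < HIDE_THRESHOLD

comments = [("helpful answer", 120, 4), ("spam link", 2, 40)]
for text, ups, downs in comments:
    print(text, "-> hidden" if is_hidden(ups, downs) else "-> shown")
```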
-
-
social-media-ethics-automation.github.io
-
Governments might also have rules about content moderation and censorship, such as laws in the US against Child Sexual Abuse Material (CSAM). China additionally censors various news stories in their country, like stories about protests. In addition to banning news on their platforms, in late 2022 China took advantage of Elon Musk having fired almost all Twitter content moderators to hide news of protests by flooding Twitter with spam and porn.
This highlights the power of government influence over digital platforms and the different approaches countries take to control information. While some censorship, like blocking harmful content, is broadly supported, other cases, such as China flooding Twitter with spam to bury news of protests, illustrate how censorship can also be a tool for controlling public awareness.
-
-
social-media-ethics-automation.github.io
-
Since social media platforms can gather so much data on their users, they can try to use data mining to figure out information about their users’ moods, mental health problems, or neurotypes (e.g., ADHD, Autism). For example, Facebook has a suicide detection algorithm, where they try to intervene if they think a user is suicidal (Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts). As social media companies have tried to detect talk of suicide and sometimes remove content that mentions it, users have found ways of getting around this by inventing new word uses, like “unalive.” Larger efforts at trying to determine emotions or mental health through things like social media use, or iPhone or Apple Watch use, have had very questionable results, and any claims of being able to detect emotions reliably are probably false. Additionally, these attempts at detecting mental health can violate privacy or be used for unethical surveillance. For example: your employer might detect that you are unhappy and consider firing you, since they think you might not be fully committed to the job; or someone might build a system that tries to detect who is Autistic, and then force them into an abusive therapy system to try and “cure” them of their Autism (see also this more scientific explanation of that linked article).
Even though the data mining that occurs on social media sites can provide insights useful for early detection and intervention, it also raises major privacy and ethical concerns. Efforts to use data mining to detect things such as emotions may ultimately lead to misuse, discrimination, and harmful interventions, underscoring the need for ethical standards in these practices.
-
-
social-media-ethics-automation.github.io
-
One of the ways social media can be beneficial to mental health is in finding community (at least if it is a healthy one, and not toxic like in the last section). For example, if you are bullied at school (and by classmates on some social media platform), you might find a different online community that supports you. Or take the example of Professor Casey Fiesler finding a community that shared her interests (see also her article). So you might find a safe space online to explore part of yourself that isn’t safe in public (e.g., Trans Twitter and the beauty of online anonymity). Or you might find places to share or learn about mental health (in fact, from seeing social media posts, Kyle realized that ADHD was causing many more problems in his life than just having trouble sitting still, and he sought diagnosis and treatment). There are also support groups for various issues people might be struggling with, like ADHD, or having been raised by narcissistic parents.
I believe that social media can play a positive role in helping improve an individual's mental health as it can provide a supportive community where individuals can feel understood and accepted. These communities can help people explore different personal identities, gain insights on their mental health, and connect with other individuals who face similar situations and challenges.
-
-
social-media-ethics-automation.github.io
-
Content is sometimes shared without modification, fitting the original intention, but let’s look at cases where there is some sort of modification that aligns with the original intention. We’ll include several examples on this page from the TikTok Duet feature, which allows people to build off the original video by recording a video of themselves to play at the same time next to the original. So for example, this tweet thread of TikTok videos (cross-posted to Twitter) starts with one TikTok user singing a short parody musical of an argument in a grocery store. The subsequent tweets in the thread build on the prior versions, first where someone adds themselves singing the other half of the argument, then where someone adds themselves singing the part of their child, then where someone adds themselves singing the part of an employee working at the store: “This thread is evidence of the way TikTok’s duet feature can result in the most hilarious and creative collaborations. Pretty much a guy wrote a musical number about a grocery store and everyone is adding onto it and I am deceased. Part 1 pic.twitter.com/4z5Mqbscgp” — Emma Lynn (@emmaspacelynn) October 5, 2020
I really like how this example illustrates the way the Duet feature on TikTok lets users build upon existing content while adding their own creativity, ultimately fostering a collaborative environment. This grocery store thread on Twitter highlights how users add layers to the original video, enhancing the collective experience and engaging a broader audience.
-
-
social-media-ethics-automation.github.io
-
For social media content, replication means that the content (or a copy or modified version) gets seen by more people. Additionally, when a modified version gets distributed, future replications of that version will include the modification (a.k.a. inheritance). There are ways of duplicating that are built into social media platforms: actions such as liking, reposting, replying, and paid promotion get the original posting to show up for users more; actions like quote tweeting, or the TikTok Duet feature, let people see the original content, but modified with new context; and social media sites also provide ways of embedding posts in other places, like in news articles. There are also ways of replicating social media content that aren’t directly built into the social media platform, such as copying images or text and reposting them yourself, or taking screenshots and cross-posting to different sites.
In my opinion, this section highlights how social media platforms facilitate content replication with features like reposting, quoting, and embedding, through which modifications can be inherited by later copies. It also identifies replication methods users employ outside platform features, such as screenshots and cross-posting, which extend a post beyond the platform’s control.
-
- Oct 2024
-
social-media-ethics-automation.github.io
-
Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content. This is particularly important for people who make their money from social media content. For example, in the case of the simple “show latest posts” algorithm, the best way to get your content seen is to constantly post and repost your content (though if you annoy users too much, it might backfire). Other strategies include: clickbait, trying to give you a mystery you have to click to find the answer to (e.g., “You won’t believe what happened when this person tried to eat a stapler!”), which they hope boosts clicks on their link, their standing in the recommendation algorithm, and views of their ads; trolling, where by provoking reactions they hope to boost their content more; and coordinated actions, like having many accounts (possibly including bots) like a post, having many people use a hashtag, or having people trade positive reviews. YouTuber F.D. Signifier explores the YouTube recommendation algorithm and interviews various people about their experiences (particularly Black YouTubers like himself) in this video (it’s very long, so we’ll put some key quotes below):
I personally believe that recommendation algorithms on sites like YouTube push creators to modify their behavior to gain visibility, especially creators who depend on these sites for their income. This creates pressure to use strategies like clickbait that manipulate engagement, potentially at the cost of content quality.
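As a toy illustration of why a plain “show latest posts” feed rewards constant reposting, consider this sketch (all usernames and timings are hypothetical):

```python
from datetime import datetime, timedelta

# Each post: (author, posted_at). A "show latest posts" feed just sorts by time.
now = datetime.now()
posts = [("casual_user", now - timedelta(hours=6))]
# A self-promoter reposting every hour dominates the top of the feed.
posts += [("self_promoter", now - timedelta(hours=h)) for h in range(5)]

feed = sorted(posts, key=lambda p: p[1], reverse=True)
for author, when in feed[:5]:
    print(author, when.strftime("%H:%M"))
# All five newest slots belong to the frequent reposter; the casual user
# never appears near the top, no matter how good their one post was.
```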
-
-
social-media-ethics-automation.github.io
-
Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibility that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine vs. powder cocaine in the 90s. The guidelines suggested harsher sentences for the version of cocaine more commonly used by Black people, and lighter sentences for the version more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act)
This paragraph highlights how systemic biases can lead to discrimination even when individual actors have no biased intentions. It shows the importance of examining policies and guidelines themselves, as they can perpetuate inequality in ways an individual alone may overlook.
-
-
social-media-ethics-automation.github.io
-
A disability is an ability that a person doesn’t have, but that their society expects them to have. For example: If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation. If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation. If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation. If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs would have a disability in that situation. Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another.
This paragraph highlights that disabilities are socially defined, arising from assumptions about what abilities people are expected to have in specific environments. By acknowledging that these expectations vary across societies, it underscores the role of social norms in determining who is disabled in a given situation.
-
-
social-media-ethics-automation.github.io
-
Those with disabilities often find ways to cope with their disability, that is, find ways to work around difficulties they encounter and seek out places and strategies that work for them (whether realizing they have a disability or not). Additionally, people with disabilities might change their behavior (whether intentionally or not) to hide the fact that they have a disability, which is called masking and may take a mental or physical toll on the person masking, which others around them won’t realize. For example, kids who are nearsighted and don’t realize their ability to see is different from other kids will often seek out seats at the front of classrooms where they can see better. As for us two authors, we both have ADHD and were drawn to PhD programs where our tendency to hyperfocus on following our curiosity was rewarded (though executive dysfunction with finishing projects created challenges). This way of managing disabilities puts the burden fully on disabled people to manage their disability in a world that was not designed for them, trying to fit in with “normal” people.
This highlights how people with disabilities often adapt to their environment in ways that help them cope, whether through finding effective strategies or masking to blend in, which can be exhausting. By placing the burden of adaptation on individuals, society reinforces an inequitable system that overlooks the need for inclusive design and broader support.
-
-
social-media-ethics-automation.github.io
-
We want to provide you, the reader, a chance to explore online privacy more. In this activity, you will be looking at an official brochure on the EU’s GDPR privacy law. We will again follow the five-step CIDER method (Critique, Imagine, Design, Expand, Repeat). So read through the official brochure on the EU’s GDPR privacy law (for this activity ignore any additional details or clarifications made elsewhere in the GDPR, since those weren’t deemed important enough to put on this brochure). Then do the following (preferably on paper or in a blank computer document):
This activity offers a way to critically examine online privacy through the EU’s GDPR framework using the CIDER method, encouraging reflection on the assumptions and biases built into privacy rules. By engaging with each step of the method, readers can develop a deeper understanding of the law’s implications and the diverse needs of digital users.
-
-
social-media-ethics-automation.github.io
-
While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore, if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time). But while that is the proper security for storing passwords, companies don’t always follow it. So for example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly, and then hackers leaked their password database of 153 million users.

From a security perspective there are many risks that a company faces, such as: employees at the company misusing their access, like Facebook employees using their database permissions to stalk women; or hackers finding a vulnerability and inserting, modifying, or downloading information, for example hackers stealing the names, Social Security numbers, and birthdates of 143 million Americans from Equifax, or hackers posting publicly the phone numbers, names, locations, and some email addresses of 530 million Facebook users (about 7% of all people on Earth).

Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target. Hackers can target individuals with attacks like: password reuse attacks, where if they find out your password from one site, they try that password on many other sites; tricking a computer into thinking they are another site (for example, the US NSA impersonated Google); and social engineering, where they try to gain access to information or locations by tricking people, for example phishing attacks, where they make a fake version of a website or app and try to get you to enter your information or password into it (some people have made malicious QR codes to take you to a phishing site), or many of the actions done by the con man Frank Abagnale, which were portrayed in the movie Catch Me If You Can.

One of the things you can do as an individual to better protect yourself against hacking is to enable 2-factor authentication on your accounts.
This passage highlights the risks and challenges of data security on social media platforms, where even large companies fail to protect users’ information. It also emphasizes individual responsibility, such as enabling two-factor authentication, to help mitigate the risks of data breaches and hacking attacks.
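As a minimal sketch of the “special individual encryption process” the excerpt describes (salted, deliberately slow hashing), assuming nothing beyond Python's standard library, password storage might look like this:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a unique random salt (one per password)."""
    salt = os.urandom(16)  # unique salt: identical passwords hash differently
    # Many iterations make each brute-force guess deliberately slow.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-hash the guess with the stored salt and compare in constant time."""
    guess = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(guess, digest)

# The database stores only (salt, digest): it can confirm a password,
# but it cannot look the password up or tell if two users chose the same one.
salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("password123", salt, digest))                   # False
```

Storing passwords in plain text, as in the Facebook/Instagram example, skips all of this, so anyone with database access can read every password directly.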
-
-
social-media-ethics-automation.github.io
-
We’ve been accessing Reddit through Python and the “PRAW” code library. The PRAW code library works by sending requests across the internet to Reddit, using what is called an “application programming interface,” or API for short. APIs have a set of rules for what requests you can make, what happens when you make the request, and what information you can get back. If you are interested in learning more about what you can do with PRAW and what information you can get back, you can look at the official documentation, but be warned that it is not organized in a friendly way for newcomers and takes some getting used to. You can learn a little more by clicking on the PRAW models and finding a list of the types of data for each of the models, and a list of functions (i.e., actions) you can do with them.
I believe APIs are very important because they, like Reddit’s PRAW library, enable structured access to data. While the PRAW documentation offers deeper insight into its capabilities, it can be challenging to navigate, requiring patience and exploration to make full use of the library.
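For instance, a minimal PRAW session might look like the sketch below. The credential strings are placeholders you would get by registering an app with Reddit; `praw.Reddit`, `subreddit()`, and `.hot()` are standard parts of the documented PRAW API:

```python
import praw

# Placeholder credentials: register an app at reddit.com/prefs/apps for real ones.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="ethics-book-demo by u/YOUR_USERNAME",
)

# The API defines what we can request (here, the current "hot" posts of a
# subreddit) and what we get back (model objects with fields like .title).
for submission in reddit.subreddit("python").hot(limit=5):
    print(submission.score, submission.title)
```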
-
-
social-media-ethics-automation.github.io
-
For example, social media data about who you are friends with might be used to infer your sexual orientation. Social media data might also be used to infer people’s: Race Political leanings Interests Susceptibility to financial scams Being prone to addiction (e.g., gambling) Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence. Social media data can also be used to infer information about larger social trends like the spread of misinformation. One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many people were leaving bad reviews of Yankee Candles saying “they don’t have any scent” (note: COVID-19 can cause a loss of the ability to smell):
This shows how social media data can reveal personal traits, such as political views or susceptibility to scams, as well as societal trends, like the spread of misinformation. While unconventional data can provide real insights, as in the link between COVID-19 and bad candle reviews, concerns about privacy, bias, and the revival of flawed methods remain significant.
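The Yankee Candle observation is, at bottom, a correlation between two time series. A sketch of that kind of check, using entirely invented weekly numbers, might look like:

```python
# Hypothetical weekly counts: COVID-19 cases and "no scent" candle reviews.
cases = [100, 220, 480, 900, 1500, 2100]
no_scent_reviews = [3, 5, 11, 20, 34, 45]

# Pearson correlation computed by hand (no external libraries needed).
n = len(cases)
mx, my = sum(cases) / n, sum(no_scent_reviews) / n
cov = sum((x - mx) * (y - my) for x, y in zip(cases, no_scent_reviews))
sx = sum((x - mx) ** 2 for x in cases) ** 0.5
sy = sum((y - my) ** 2 for y in no_scent_reviews) ** 0.5
print(f"correlation = {cov / (sx * sy):.2f}")  # near 1.0 = strong correlation
```

Of course, correlation alone doesn't establish causation; the candle example is striking precisely because a plausible mechanism (loss of smell) connects the two series.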
-
-
social-media-ethics-automation.github.io
-
To go in a different direction for our last example, let’s look at an example of trolling as a form of protest. In the Black Lives Matter protests of 2020, Dallas Police made an app where they asked people to upload videos of protesters doing anything illegal. In support of the protesters, K-pop fans swarmed the app and uploaded as many K-pop videos as they could, eventually leading to the app crashing and becoming unusable, and thus protecting the protesters from this attempt at police surveillance. Read more at The Verge: K-pop stans overwhelm app after Dallas police ask for videos of protesters. For another example of trolling as protest, this one with bots, see: A TikToker said he wrote code to flood Kellogg with bogus job applications after the company announced it would permanently replace striking workers
I feel like this example of trolling is really a form of digital protest, since the disruption was leveraged to support a cause. By overwhelming a platform with irrelevant content, trolling helped protestors push back against surveillance and corporate actions, and this can be a very effective form of activism.
-
-
social-media-ethics-automation.github.io
-
Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction. When the goal is provoking an emotional reaction, it is often for a negative emotion, such as anger or emotional pain. When the goal is disruption, it might be attempting to derail a conversation (e.g., concern trolling), or make a space no longer useful for its original purpose (e.g., joke product reviews), or try to get people to take absurd fake stories seriously.
I feel like one of the goals of trolling is to manipulate emotions by introducing fake content, which often results in frustration or chaos on social media pages and attracts attention. Trolling undermines genuine conversation between users and can turn social media pages and their comment sections into hostile environments.
-
-
social-media-ethics-automation.github.io
-
In 2016, when Donald Trump was running a campaign to be the US President, one Twitter user pointed out that you could see which of the tweets on Donald Trump’s Twitter account were posted from an Android phone and which from an iPhone, and that the tone was very different. A data scientist decided to look into it more and found: “My analysis … concludes that the Android and iPhone tweets are clearly from different people, posting during different times of day and using hashtags, links, and retweets in distinct ways. What’s more, we can see that the Android tweets are angrier and more negative, while the iPhone tweets tend to be benign announcements and pictures. … this lets us tell the difference between the campaign’s tweets (iPhone) and Trump’s own (Android).” (Read more in this article from The Guardian.) Note: we can no longer run code to check this ourselves because first, Donald Trump’s account was suspended in January 2021 for inciting violence; then, when Elon Musk decided to reinstate Donald Trump’s account (using a Twitter poll as an excuse, but how many of the votes were bots?), Elon Musk also decided to remove the ability to look up a tweet’s source.
I believe the contrast between the two types of tweets, those made on Android and those made on iPhone, highlights how tone and message can differ depending on who is managing an account. This difference also shows how data analysis can uncover patterns in communication, distinguishing personal tweets from campaign-driven ones.
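The analysis worked by grouping tweets by the “source” field Twitter exposed at the time and comparing language across the groups. A heavily simplified sketch of that idea, with invented tweets and a crude word-list stand-in for the real sentiment analysis, might look like:

```python
# Invented examples; the real analysis used thousands of tweets and
# proper sentiment scoring, not a hand-picked word list.
tweets = [
    {"source": "Twitter for Android", "text": "The FAKE NEWS media is the enemy!"},
    {"source": "Twitter for iPhone", "text": "Join us tomorrow in Cincinnati at 7pm."},
    {"source": "Twitter for Android", "text": "Sad!"},
]

ANGRY_WORDS = {"fake", "enemy", "sad"}  # crude stand-in for sentiment analysis

by_source: dict[str, list[float]] = {}
for t in tweets:
    words = t["text"].lower().split()
    anger = sum(w.strip("!.,") in ANGRY_WORDS for w in words) / len(words)
    by_source.setdefault(t["source"], []).append(anger)

for source, scores in by_source.items():
    print(source, f"mean anger score: {sum(scores) / len(scores):.2f}")
```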
-
-
social-media-ethics-automation.github.io
-
The way we present ourselves to others around us (our behavior, social role, etc.) is called our public persona. We also may change how we behave and speak depending on the situation or who we are around, which is called code-switching. While modified behaviors to present a persona or code-switch may at first look inauthentic, they can be a way of authentically expressing ourselves in each particular setting. For example: speaking in a formal manner when giving a presentation or answering questions in a courtroom may be a way of authentically sharing your experiences and emotions, but tailored to the setting; sharing those same experiences and emotions with a close friend may look very different, but still can be authentic. Different communities have different expectations and meanings around behavior and presentation. So what is appropriate authentic behavior depends on what group you are from and what group you are interacting with, like in this gif of President Obama below:
I believe that code-switching and our public personas allow us to navigate different social contexts and situations that we may be put in while also still being able to express authentic versions of ourselves. Adjusting our behavior to match different social situations reflects our understanding of social expectations and our ability to communicate in different ways.
-
-
social-media-ethics-automation.github.io
-
In the mid-1990s, some internet users started manually adding regular updates to the top of their personal websites (leaving the old posts below), using their sites as an online diary, or a (web) log of their thoughts. In 1998/1999, several web platforms were launched to make it easy for people to make and run blogs (e.g., LiveJournal and Blogger.com). With these blog hosting sites, it was much simpler to type up and publish a new blog entry, and others visiting your blog could subscribe to get updates whenever you posted a new post, and they could leave a comment on any of the posts.
I believe that the rise of weblogs allowed individuals to easily share their thoughts and experiences with a broader audience online. Platforms like LiveJournal fostered early forms of online community through reader engagement and feedback.
-
-
social-media-ethics-automation.github.io
-
In 1997, the internet service provider AOL introduced a chat system called AOL Instant Messenger (AIM) that anyone could join and maintain a list of friends. You could then see what friends were currently available, and start sending them messages. You could also leave away messages or profile quotes. Fig. 5.4 AIM let you organize your contacts and see who was currently online.
I believe that AIM was extremely pivotal in shaping the way we interact online, as it introduced real-time digital communication and status updates. Reflecting on its influence, I believe it helped set the foundation for many of the features social media apps use today.
-
-
social-media-ethics-automation.github.io
-
Now it’s your turn: choose some data that you might want to store on a social media site, and think through the storage types and constraints you might want to use, for fields such as: age, name, address, relationship status, etc.
Answering these questions: for age I would use an integer with a range constraint; for name I would store a string with a character limit, so it could handle all names but no one could type in long random text; for address, I would use structured string fields so that someone can enter their street, city, state, etc.; and for relationship status, I would use predefined options such as married, single, or complicated, so that someone just chooses from the options, keeping the data consistent and easy to process.
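One hypothetical way to encode those choices, as a sketch rather than a real database schema, is a Python dataclass with simple validation (all field names and limits here are illustrative assumptions):

```python
from dataclasses import dataclass
from enum import Enum

class RelationshipStatus(Enum):  # predefined options keep the data consistent
    SINGLE = "single"
    MARRIED = "married"
    COMPLICATED = "complicated"

@dataclass
class Profile:
    name: str                    # free text, but length-limited below
    age: int                     # whole number, constrained to a sane range
    address: str                 # could be split into street/city/state fields
    status: RelationshipStatus   # must be one of the predefined options

    def __post_init__(self):
        if not (0 <= self.age <= 150):
            raise ValueError("age out of range")
        if len(self.name) > 100:
            raise ValueError("name too long")

p = Profile("Ada Lovelace", 36, "12 St James's Sq, London", RelationshipStatus.SINGLE)
print(p.status.value)  # "single"
```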
-
-
social-media-ethics-automation.github.io
-
Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata). For example: If we think of a tweet’s contents (text and photos) as the main data of a tweet, then additional information such as the user, time, and responses would be considered metadata. If we download information about a set of tweets (text, user, time, etc.) to analyze later, we might consider that set of information as the main data, and our metadata might be information about our download process, such as when we collected the tweet information, which search term we used to find it, etc. Now that we’ve looked some at the data in a tweet, let’s look next at how different pieces of this information are saved.
Reflecting on the idea of metadata, it is fascinating that the definition can shift depending on the perspective from which the data is being analyzed. I believe that because the definition is so flexible, it shows how important metadata is for providing context, as it shapes how we interpret the main data in different situations.
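A small illustration of how the data/metadata split might be represented (the field names here are hypothetical):

```python
# Hypothetical structure: the tweet's contents are the "main" data,
# and everything describing that data is metadata.
tweet = {
    "data": {
        "text": "Look at this sunset!",
        "photos": ["sunset.jpg"],
    },
    "metadata": {
        "user": "@example_user",
        "time": "2024-11-02T18:45:00Z",
        "replies": 3,
    },
}

# If we collect many tweets to analyze later, the whole set becomes the data,
# and our collection process becomes the metadata, just as the excerpt says.
dataset = {
    "data": [tweet],
    "metadata": {"collected_at": "2024-11-03", "search_term": "sunset"},
}
print(dataset["metadata"]["search_term"])
```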
-
-
social-media-ethics-automation.github.io
-
Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers. This means we can analyze the ethics of the action of the bot, as well as the intentions of the various people involved, though those all might be disconnected.
I believe the ethical analysis of bots becomes complex because it involves examining both the actions of the bot and the intentions of those who create and run it. This disconnect between the programmers’ motives and the bot’s automated actions can create ethical gray areas, especially if the bot’s behavior diverges from its original purpose.
-
-
social-media-ethics-automation.github.io
-
Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them.
I believe that bots can raise concerns about the authenticity of content on social media platforms, as bots have the potential to spread misinformation or influence public opinion without people realizing they are being influenced. I believe social media platforms should require accounts to disclose to other users when they are bots.
-
-
social-media-ethics-automation.github.io
-
Platforms can be minimalist, like Yo, which only lets you say “yo” to people and nothing else. Platforms can also be tailored for specific groups of people, like a social media platform for low-income blind people in India. Additionally, some sites are primarily built for other purposes but have a social media component as well, such as the Amazon online store, which has user reviews and customer questions & answers, or news sites that have comment sections. There are many other varieties of social media sites, though hopefully we have at least covered a decent range of them.
Even though social media has had some negative impacts on our society and has made it easier to spread fake news and propaganda, I believe that social media also plays a huge role in supporting our society by allowing people to develop and make connections. It fosters communication and collaboration on a global scale that was previously unimaginable, and allows work and projects to be done more quickly and accurately.
-
-
social-media-ethics-automation.github.io
-
Only “Can we do this?” Never “Should we do this?” We’ve seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news.
This quote really intrigues me, as it highlights the ethical blind spot it describes: builders ask whether something can be done, not whether it should be. It also captures Nanjiani’s frustration that existing safeguards, like Twitter’s or Facebook’s handling of abuse and fake news, are not enough, and that this attitude represents the tech world right now.
-