social-media-ethics-automation.github.io
Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content. This is particularly important for people who make their money from social media content. For example, in the case of the simple “show latest posts” algorithm, the best way to get your content seen is to constantly post and repost your content (though if you annoy users too much, it might backfire). Other strategies include things like:
- Clickbait: trying to give you a mystery you have to click to find the answer to (e.g., “You won’t believe what happened when this person tried to eat a stapler!”). They do this to boost clicks on their link, which they hope boosts them in the recommendation algorithm and gets their ads more views
- Trolling: by provoking reactions, they hope to boost their content more
- Coordinated actions: have many accounts (possibly including bots) like a post, or many people use a hashtag, or have people trade positive reviews
Youtuber F.D. Signifier explores the YouTube recommendation algorithm and interviews various people about their experiences (particularly Black Youtubers like himself) in this video (it’s very long, so we’ll put some key quotes below):
I personally believe that recommendation algorithms on sites like YouTube influence and even force influencers to modify their behavior so that they can gain visibility on the site, especially influencers who depend on these sites for their income and livelihood. This ultimately creates pressure to use strategies like clickbait, manipulating how many people interact with their content and adjusting content quality in pursuit of higher income from these sites.
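The simple “show latest posts” algorithm mentioned in the excerpt can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions (the `time` field and function name are hypothetical, not from the text):

```python
# Minimal sketch of a "show latest posts" feed: sort posts newest first.
# Each post is assumed to be a dict with a "time" field (e.g., a Unix timestamp).
def latest_first(posts):
    return sorted(posts, key=lambda post: post["time"], reverse=True)
```

Constantly reposting exploits exactly this rule: every fresh copy of the content gets a newer timestamp and jumps back to the top of the feed.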
-
Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibilities that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine vs. powder cocaine in the 90s. The guidelines suggested harsher sentences for the version of cocaine more commonly used by Black people, and lighter sentences for the version of cocaine more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act).
This paragraph highlights how systemic biases can ultimately lead to discrimination, even when individual actors do not have biased intentions. It shows the importance of examining policies and guidelines, as they can perpetuate inequality in ways that an individual alone may overlook.
-
A disability is an ability that a person doesn’t have, but that their society expects them to have.1 For example:
- If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation.
- If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation.
- If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation.
- If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs, would have a disability in that situation.
Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group might just be “normal” in another.
This paragraph highlights that disabilities are often socially defined, arising from assumptions about what abilities people are expected to have in specific environments. By acknowledging that these expectations vary across societies, it highlights the role of social norms in disabling individuals whose abilities differ from what is expected.
-
Those with disabilities often find ways to cope with their disability, that is, find ways to work around difficulties they encounter and seek out places and strategies that work for them (whether realizing they have a disability or not). Additionally, people with disabilities might change their behavior (whether intentionally or not) to hide the fact that they have a disability, which is called masking. Masking may take a mental or physical toll on the person, one that others around them won’t realize. For example, kids who are nearsighted and don’t realize their ability to see is different from other kids will often seek out seats at the front of classrooms where they can see better. As for us two authors, we both have ADHD and were drawn to PhD programs where our tendency to hyperfocus on following our curiosity was rewarded (though executive dysfunction with finishing projects created challenges)1. This way of managing disabilities puts the burden fully on disabled people to manage their disability in a world that was not designed for them, trying to fit in with “normal” people.
This highlights how people with disabilities often adapt to their environment in ways that help them cope, whether through finding effective strategies or masking to blend in, which can be exhausting. By placing the burden of adaptation on individuals, society reinforces an inequitable system that overlooks the need for inclusive design and broader support.
-
We want to provide you, the reader, a chance to explore online privacy more. In this activity, you will be looking at an official brochure on the EU’s GDPR privacy law1. We will again follow the five-step CIDER method (Critique, Imagine, Design, Expand, Repeat). So read through the official brochure on the EU’s GDPR privacy law (for this activity ignore any additional details or clarifications made elsewhere in the GDPR, since those weren’t deemed important enough to put on this brochure). Then do the following (preferably on paper or in a blank computer document):
This activity provides a way to critically examine online privacy through the EU’s GDPR framework using the CIDER method, encouraging an inclusive perspective on user assumptions and biases. By engaging in each step of the method, people can develop a deeper understanding of privacy law implications and the diverse needs of digital users.
-
While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore, if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time). But companies don’t always follow that proper security practice. For example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly, and then hackers leaked their password database of 153 million users. From a security perspective there are many risks that a company faces, such as:
- Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women
- Hackers finding a vulnerability and inserting, modifying, or downloading information. For example:
  - hackers stealing the names, Social Security numbers, and birthdates of 143 million Americans from Equifax
  - hackers posting publicly the phone numbers, names, locations, and some email addresses of 530 million Facebook users, or about 7% of all people on Earth
Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target.
Hackers can target individuals with attacks like:
- Password reuse attacks, where if they find out your password from one site, they try that password on many other sites
- Tricking a computer into thinking they are another site; for example, the US NSA impersonated Google
- Social engineering, where they try to gain access to information or locations by tricking people. For example:
  - Phishing attacks, where they make a fake version of a website or app and try to get you to enter your information or password into it. Some people have made malicious QR codes to take you to a phishing site.
  - Many of the actions done by the con-man Frank Abagnale, which were portrayed in the movie Catch Me If You Can
One of the things you can do as an individual to better protect yourself against hacking is to enable 2-factor authentication on your accounts.
This passage highlights the risks and challenges of data security on social media platforms, where even big companies fail to protect users’ information. It emphasizes the importance of individual responsibility, such as enabling two-factor authentication, to help mitigate the risks of data breaches and hacking attacks.
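The per-password protection the excerpt describes (a unique salt plus a slow hash for each password) can be sketched in Python with only the standard library. This is a minimal illustration, not any platform’s actual code; the function names and iteration count are our own choices:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # slow hashing makes each brute-force guess costly

def hash_password(password: str) -> tuple:
    """Hash a password with a random per-password salt; store (salt, digest)."""
    salt = os.urandom(16)  # unique salt, so identical passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Check a login attempt without the database ever storing the plain text."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Because each salt is random, two users with the same password get different database entries, so the database can confirm a password but cannot reveal it or even tell that two people chose the same one.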
-
- Oct 2024
We’ve been accessing Reddit through Python and the “PRAW” code library. The praw code library works by sending requests across the internet to Reddit, using what is called an “application programming interface,” or API for short. APIs have a set of rules for what requests you can make, what happens when you make the request, and what information you can get back. If you are interested in learning more, you can look at the praw library documentation to find out what the library can do. But be warned: the documentation is not organized in a friendly way for newcomers, and it takes some getting used to to figure out what these pages are talking about. You can learn a little more by clicking on the praw models and finding a list of the types of data for each of the models, and a list of functions (i.e., actions) you can do with them.
I believe that APIs are very important because they, like Reddit’s praw library, enable structured access to data. While the praw documentation offers deeper insight into its capabilities, it can be challenging to navigate, requiring patience and exploration to make full use of the library.
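As a sketch of what a praw request looks like, here is a small helper of our own (the function is hypothetical, not part of praw). Creating a real `praw.Reddit` instance requires your own API credentials, so that step is shown only in the comment:

```python
# Hypothetical helper around praw. A real praw.Reddit instance would be
# created with your own Reddit API credentials, e.g.:
#   import praw
#   reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="...")
def top_titles(reddit, subreddit_name, limit=5):
    """Return the titles of the `limit` hottest posts in a subreddit.

    `reddit` can be any object with praw's interface: .subreddit(name)
    returns an object whose .hot(limit=...) yields submissions with .title.
    """
    subreddit = reddit.subreddit(subreddit_name)
    return [submission.title for submission in subreddit.hot(limit=limit)]
```

The helper sends a request through the API (here, “give me the hottest posts in this subreddit”) and works with the structured data the API sends back.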
-
For example, social media data about who you are friends with might be used to infer your sexual orientation. Social media data might also be used to infer people’s:
- Race
- Political leanings
- Interests
- Susceptibility to financial scams
- Being prone to addiction (e.g., gambling)
Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence. Social media data can also be used to infer information about larger social trends, like the spread of misinformation. One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many people were leaving bad reviews of Yankee Candles saying “they don’t have any scent” (note: COVID-19 can cause a loss of the ability to smell):
This shows how social media data can reveal personal traits, such as political views or susceptibility to scams, as well as societal trends like misinformation. While unconventional data can provide insights, such as linking COVID-19 to bad candle reviews, concerns about privacy, bias, and the use of flawed methods remain significant in the world of social media.
-
To go in a different direction for our last example, let’s look at an example of trolling as a form of protest. During the Black Lives Matter protests of 2020, Dallas Police made an app where they asked people to upload videos of protesters doing anything illegal. In support of the protesters, K-pop fans swarmed the app and uploaded as many K-pop videos as they could, eventually leading to the app crashing and becoming unusable, and thus protecting the protesters from this attempt at police surveillance. Read more at the Verge: K-pop stans overwhelm app after Dallas police ask for videos of protesters. For another example of trolling as protest, this one with bots, see: A TikToker said he wrote code to flood Kellogg with bogus job applications after the company announced it would permanently replace striking workers
I feel like this example of trolling is more a form of digital protest, as the fans leveraged the disruption to support a cause. By using trolling to overwhelm platforms with random content, protesters can fight back against surveillance and corporate actions, and this can be a very effective form of activism.
-
Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction. When the goal is provoking an emotional reaction, it is often for a negative emotion, such as anger or emotional pain. When the goal is disruption, it might be attempting to derail a conversation (e.g., concern trolling), or make a space no longer useful for its original purpose (e.g., joke product reviews), or try to get people to take absurd fake stories seriously.
I feel like one of the goals of trolling is to manipulate emotions by introducing fake content, which often results in frustration or chaos on social media pages and attracts attention. Trolling undermines genuine conversation between users, and it can turn social media pages and their comment sections into hostile environments.
-
In 2016, when Donald Trump was running a campaign to be the US President, one Twitter user pointed out that you could see which of the Tweets on Donald Trump’s Twitter account were posted from an Android phone and which from an iPhone, and that the tone was very different. A data scientist decided to look into it more and found: “My analysis … concludes that the Android and iPhone tweets are clearly from different people, posting during different times of day and using hashtags, links, and retweets in distinct ways. What’s more, we can see that the Android tweets are angrier and more negative, while the iPhone tweets tend to be benign announcements and pictures. … this lets us tell the difference between the campaign’s tweets (iPhone) and Trump’s own (Android).” (Read more in this article from The Guardian) Note: we can no longer run code to check this ourselves because first, Donald Trump’s account was suspended in January 2021 for inciting violence, then when Elon Musk decided to reinstate Donald Trump’s account (using a Twitter poll as an excuse, but how many of the votes were bots?), Elon Musk also decided to remove the ability to look up a tweet’s source.
I believe that the contrast between the two types of tweets, those made on Android and those made on iPhone, truly highlights how tone and message can differ depending on who is managing the account. This difference also shows how data analysis can play a role in finding patterns in tweets and other forms of communication, and it highlights the difference between personal and campaign-driven posts.
-
The way we present ourselves to others around us (our behavior, social role, etc.) is called our public persona. We also may change how we behave and speak depending on the situation or who we are around, which is called code-switching. While modified behaviors to present a persona or code-switch may at first look inauthentic, they can be a way of authentically expressing ourselves in each particular setting. For example:
- Speaking in a formal manner when giving a presentation or answering questions in a courtroom may be a way of authentically sharing your experiences and emotions, but tailored to the setting
- Sharing those same experiences and emotions with a close friend may look very different, but still can be authentic
Different communities have different expectations and meanings around behavior and presentation. So what is appropriate authentic behavior depends on what group you are from and what group you are interacting with, like this gif of President Obama below:
I believe that code-switching and our public personas allow us to navigate different social contexts and situations that we may be put in while also still being able to express authentic versions of ourselves. Adjusting our behavior to match different social situations reflects our understanding of social expectations and our ability to communicate in different ways.
-
In the mid-1990s, some internet users started manually adding regular updates to the top of their personal websites (leaving the old posts below), using their sites as an online diary, or a (web) log of their thoughts. In 1998/1999, several web platforms were launched to make it easy for people to make and run blogs (e.g., LiveJournal and Blogger.com). With these blog hosting sites, it was much simpler to type up and publish a new blog entry, and others visiting your blog could subscribe to get updates whenever you posted a new post, and they could leave a comment on any of the posts.
I believe that the rise of weblogs allowed individuals to easily share their thoughts and experiences with a broader audience. Platforms like LiveJournal fostered early forms of online community with the help of reader engagement and feedback.
-
In 1997, the internet service provider AOL introduced a chat system called AOL Instant Messenger (AIM) that anyone could join and maintain a list of friends. You could then see which friends were currently available, and start sending them messages. You could also leave away messages or profile quotes.
Fig. 5.4 AIM let you organize your contacts and see who was currently online.
I believe that AIM was extremely pivotal in shaping the way we interact online, as it introduced real-time digital communication and status updates. Reflecting on the influence AIM had on digital communication, I believe it helped set the foundation for many of the features that social media apps use today.
-
Now it’s your turn: choose some data that you might want to store on a social media site, and think through the storage types and constraints you might want to use:
- Age
- Name
- Address
- Relationship status
- etc.
Answering these questions, I believe that for age I would use an integer with a range constraint. For name, I would store a string with a character-length constraint, so it could handle all names but no one could type in something arbitrarily long. For address, I would use strings with structured fields so that someone can enter their street, city, state, etc. For relationship status, I would use predefined options such as married, single, or it’s complicated, so that someone just chooses from the options, which keeps the data consistent and easy to work with.
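These choices can be sketched in Python. The field names, length limit, age range, and relationship options below are all our own assumptions for illustration, not part of any real platform’s schema:

```python
from dataclasses import dataclass

MAX_NAME_LENGTH = 100  # assumed limit to block arbitrarily long input
RELATIONSHIP_OPTIONS = {"single", "married", "it's complicated"}  # predefined options

@dataclass
class Profile:
    age: int                  # integer with a range constraint
    name: str                 # string with a length constraint
    street: str               # address stored as structured fields
    city: str
    state: str
    relationship_status: str  # must be one of RELATIONSHIP_OPTIONS

    def __post_init__(self):
        # Enforce the constraints whenever a Profile is created.
        if not 0 <= self.age <= 150:
            raise ValueError("age out of range")
        if not 0 < len(self.name) <= MAX_NAME_LENGTH:
            raise ValueError("name empty or too long")
        if self.relationship_status not in RELATIONSHIP_OPTIONS:
            raise ValueError("unknown relationship status")
```

Validating at creation time means the database never has to store an impossible age or an unrecognized relationship status.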
-
Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata). For example: If we think of a tweet’s contents (text and photos) as the main data of a tweet, then additional information such as the user, time, and responses would be considered metadata. If we download information about a set of tweets (text, user, time, etc.) to analyze later, we might consider that set of information as the main data, and our metadata might be information about our download process, such as when we collected the tweet information, which search term we used to find it, etc. Now that we’ve looked some at the data in a tweet, let’s look next at how different pieces of this information are saved.
Reflecting on the idea of metadata, it is super interesting that the definition of this concept can shift based on the perspective from which the data is being analyzed. I believe that because the definition is so flexible, it shows how important metadata is for providing context, as it shapes how we interpret the main data in different situations.
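The tweet example above could be sketched as a Python dictionary. The field names and values here are our own illustration, not Twitter’s actual data format:

```python
# A hypothetical record for one downloaded tweet. The tweet's text and photos
# are the main data; the rest is metadata, either about the tweet itself or
# about our download process.
tweet_record = {
    "data": {
        "text": "Look at this sunset!",
        "photos": ["sunset.jpg"],
    },
    "tweet_metadata": {
        "user": "@example_user",
        "time": "2024-10-01T18:45:00Z",
    },
    "download_metadata": {
        "collected_at": "2024-10-03T09:00:00Z",
        "search_term": "sunset",
    },
}
```

Which keys count as “metadata” depends on perspective: if we later analyze the whole downloaded set, the tweet’s user and time become part of our main data, and only the download information remains metadata.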
-
Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers. This means we can analyze the ethics of the action of the bot, as well as the intentions of the various people involved, though those all might be disconnected.
I believe the ethical analysis of bots becomes complex because it involves examining both the actions of the bot and the intentions of those who use it. This disconnect between the programmers’ motives and the bot’s automated actions can lead to ethical gray areas, especially if the bot’s behavior diverges from its original purpose.
-
Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them.
I believe that bots raise concerns about the authenticity of content on social media platforms, as bots can spread misinformation or influence public opinion without people realizing that they are being influenced by bots. I believe that social media platforms need to make it known to users when an account is a bot account.
-
Platforms can be minimalist, like Yo, which only lets you say “yo” to people and nothing else. Platforms can also be tailored for specific groups of people, like a social media platform for low-income blind people in India. Additionally, some sites are primarily built for other purposes but have a social media component as well, such as the Amazon online store that has user reviews and customer questions & answers, or news sites that have comment sections. There are many other varieties of social media sites, though hopefully we have at least covered a decent range of them.
Even though social media has had some negative impacts on our society and has made it easier to spread fake news and propaganda, I believe that social media also plays a huge role in supporting our society and allowing people to develop and make connections. It fosters communication and collaboration on a global scale that was previously unimaginable, and it allows work and projects to be done more quickly and accurately.
-
Only “Can we do this?” Never “Should we do this?” We’ve seen that same blasé attitude in how Twitter or Facebook deal with abuse/fake news.
This quote really intrigues me, as it highlights the ethical motivations of users on Twitter and how Twitter doesn’t do much to stop unethical and fake tweets and comments. It also highlights Nanjiani’s feelings about the safeguards that exist, how he believes they are not enough, and how this represents the tech world right now.
-