social-media-ethics-automation.github.io
11.2.1. Individual vs. Systemic Analysis
Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibilities that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine vs. powder cocaine in the 90s. The guidelines suggested harsher sentences for the version of cocaine more commonly used by Black people, and lighter sentences for the version of cocaine more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act)
Individual analysis examines personal actions and intentions, while systemic analysis uncovers biases embedded in policies and institutions. For example, sentencing disparities for crack vs. powder cocaine in the 90s disproportionately impacted Black communities, showing how systemic biases can lead to unfair outcomes even without individual intent to discriminate.
social-media-ethics-automation.github.io
When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users. Some recommendation algorithms can be simple, such as reverse chronological order, meaning the site shows users the latest posts (like how blogs work, or Twitter’s “See latest tweets” option). They can also be very complicated, taking into account many factors, such as:
- Time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today)
- Whether the post was made or liked by my friends or people I’m following
- How much this post has been liked, interacted with, or hovered over
- Which other posts I’ve been liking, interacting with, or hovering over
- What people connected to me or similar to me have been liking, interacting with, or hovering over
- What people near you have been liking, interacting with, or hovering over (they can find your approximate location, like your city, from your internet IP address, and they may know even more precisely). This perhaps explains why sometimes when you talk about something out loud it gets recommended to you (because someone around you then searched for it). Or maybe they are actually recording what you are saying and recommending based on that.
- Phone numbers or email addresses (sometimes collected deceptively), which can be used to suggest friends or contacts
- And probably many more factors as well!
Recommendation algorithms drive what we see online by analyzing our behavior, connections, and even our location to personalize content. They prioritize recent, popular, or similar posts, shaping user experience but also raising privacy concerns, especially as they leverage personal data. Balancing relevance with ethical transparency is essential.
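As a concrete illustration of the factors listed above, here is a minimal sketch in Python of the two approaches the excerpt mentions: reverse chronological ordering versus a weighted engagement score. The field names, weights, and data are made up for illustration and are not any platform’s actual algorithm.

```python
from datetime import datetime, timezone

# Hypothetical post records; real platforms track far more signals than this.
posts = [
    {"id": 1, "created": datetime(2024, 10, 1, tzinfo=timezone.utc), "likes": 120, "from_friend": False},
    {"id": 2, "created": datetime(2024, 10, 3, tzinfo=timezone.utc), "likes": 4,   "from_friend": True},
    {"id": 3, "created": datetime(2024, 10, 2, tzinfo=timezone.utc), "likes": 30,  "from_friend": True},
]

def reverse_chronological(posts):
    """The simple option: newest posts first, like Twitter's 'See latest tweets'."""
    return sorted(posts, key=lambda p: p["created"], reverse=True)

def engagement_ranked(posts, now=None):
    """A toy weighted score combining recency, popularity, and social closeness.
    The weights are arbitrary illustrations, not real platform values."""
    now = now or datetime.now(timezone.utc)

    def score(p):
        hours_old = (now - p["created"]).total_seconds() / 3600
        recency = 1.0 / (1.0 + hours_old)            # newer posts score higher
        popularity = p["likes"] / 100                 # more-liked posts score higher
        closeness = 1.0 if p["from_friend"] else 0.0  # boost posts from friends
        return 2.0 * recency + 1.0 * popularity + 1.5 * closeness

    return sorted(posts, key=score, reverse=True)

print([p["id"] for p in reverse_chronological(posts)])
print([p["id"] for p in engagement_ranked(posts)])
```

The two orderings can differ substantially: a slightly older post from a friend can outrank the newest post once engagement signals are weighted in.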
social-media-ethics-automation.github.io
10.2.3. Making an environment work for all
Another strategy for managing disability is to use Universal Design, which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use them. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor. In this way of managing disabilities, the burden is put on the designers to make sure the environment works for everyone, though disabled people might still need to go out of their way to access features of the environment.
10.2.4. Making a tool adapt to users
When creating computer programs, programmers can do things that aren’t possible with architecture (where Universal Design came from): programs can change how they work for each individual user. All people (including disabled people) have different abilities, and making a system that can modify how it runs to match the abilities a user has is called ability-based design. For example, a phone might detect that the user has gone from a dark to a light environment, and might automatically change the phone brightness or color scheme to be easier to read. Or a computer program might detect that a user’s hands tremble when they are trying to select something on the screen, and the computer might change the text size, or try to guess the intended selection. In this way of managing disabilities, the burden is put on the computer programmers and designers to detect and adapt to the disabled person.
Universal Design and Ability-Based Design showcase how thoughtful design can enhance inclusivity. By accommodating diverse needs in physical and digital spaces, these approaches shift responsibility from the individual to the designer. Such proactive adaptations promote accessibility, empowering everyone to navigate spaces and technology more independently and comfortably.
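A minimal sketch of the ability-based design idea described above, assuming hypothetical sensor readings and UI settings (no real device API is used here): the program adjusts itself to the user and their context rather than asking the user to adapt.

```python
def adapt_interface(settings, ambient_light_lux, tremor_detected):
    """Adjust hypothetical UI settings based on observed context and abilities.
    Thresholds and setting names are illustrative assumptions, not a real API."""
    if ambient_light_lux > 10_000:          # bright daylight: raise contrast
        settings["theme"] = "high-contrast"
        settings["brightness"] = 1.0
    else:
        settings["theme"] = "default"
        settings["brightness"] = 0.6

    if tremor_detected:                      # unsteady taps: enlarge targets
        settings["text_size"] = settings.get("text_size", 14) + 4
        settings["tap_target_px"] = 64
    return settings

print(adapt_interface({"text_size": 14}, ambient_light_lux=20_000, tremor_detected=True))
```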
social-media-ethics-automation.github.io
A disability is an ability that a person doesn’t have, but that their society expects them to have. For example:
- If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation.
- If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation.
- If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation.
- If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs, would have a disability in that situation.
Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group might just be “normal” in another.
This perspective highlights how disability is often situational and socially constructed. When society designs spaces assuming universal abilities, it creates barriers for those who differ. By rethinking accessibility to include diverse abilities, we can reduce exclusion and recognize disability as a reflection of society’s assumptions, not individual limitations.
social-media-ethics-automation.github.io
Besides hacking, there are other forms of privacy violations, such as:
- Unclear Privacy Rules: Sometimes privacy rules aren’t made clear to the people using a system. For example, if you send “private” messages on a work system, your boss might be able to read them. When Elon Musk purchased Twitter, he was also purchasing access to all Twitter Direct Messages.
- Others Posting Without Permission: Someone may post something about another person without their permission. See in particular: The perils of ‘sharenting’: The parents who share too much.
- Metadata: Sometimes the metadata that comes with content might violate someone’s privacy. For example, in 2012, former tech CEO John McAfee was a suspect in a murder in Belize and hid out in secret. But when Vice magazine wrote an article about him, the photos in the story contained metadata with his exact location in Guatemala.
- Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie ratings data sets, but at least some users’ data could be traced back to them.
- Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information could be a privacy violation. This includes the creation of Shadow Profiles, which are information about the user that the user didn’t provide or consent to.
- Non-User Information: Social media sites might collect information about people who don’t have accounts, like Facebook does.
Privacy violations extend beyond hacking to include unclear policies, unauthorized sharing, and the misuse of metadata. These breaches can expose personal information without consent, as seen in cases like John McAfee's or Netflix's data leak. Even anonymized data can be deanonymized, making privacy protections increasingly challenging in the digital age.
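A minimal sketch of the deanonymization idea with made-up data: an “anonymized” dataset can sometimes be re-identified by joining it to publicly available auxiliary data on quasi-identifiers (here, zip code and birth year; the Netflix case relied on movie-rating patterns instead).

```python
# "Anonymized" release: names removed, but quasi-identifiers remain.
anonymized = [
    {"zip": "98105", "birth_year": 1990, "diagnosis": "asthma"},
    {"zip": "98332", "birth_year": 1971, "diagnosis": "diabetes"},
]

# Public auxiliary data (e.g., a voter roll or social media profiles).
public = [
    {"name": "Alice Smith", "zip": "98105", "birth_year": 1990},
    {"name": "Bob Jones",   "zip": "98332", "birth_year": 1971},
]

def reidentify(anonymized, public):
    """Link anonymized rows to the single public record sharing their quasi-identifiers."""
    matches = []
    for row in anonymized:
        candidates = [p for p in public
                      if p["zip"] == row["zip"] and p["birth_year"] == row["birth_year"]]
        if len(candidates) == 1:  # a unique match means the row is likely re-identified
            matches.append((candidates[0]["name"], row["diagnosis"]))
    return matches

print(reidentify(anonymized, public))
```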
social-media-ethics-automation.github.io
9.1. Privacy
There are many reasons, both good and bad, that we might want to keep information private:
- There might be some things that we just feel aren’t for public sharing (like how most people wear clothes in public, hiding portions of their bodies).
- We might want to discuss something privately, avoiding embarrassment that might happen if it were shared publicly.
- We might want a conversation or action that happens in one context not to be shared in another (context collapse).
- We might want to avoid the consequences of something we’ve done (whether ethically good or bad), so we keep the action or our identity private.
- We might have done or said something we want to be forgotten or at least made less prominent.
- We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private.
- We might want to avoid physical danger from a stalker, so we might keep our location private.
- We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad).
When we use social media platforms, though, we at least partially give up some of our privacy. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored on the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be not helpful), and we might want to open access to those “private” messages to prove that they were sent.
9.1.1. Privacy Rights
Some governments and laws protect the privacy of individuals (using a Natural Rights ethical framing). These include the European Union’s General Data Protection Regulation (GDPR), which includes a “right to be forgotten,” and the United States Supreme Court has at times inferred a constitutional right to privacy.
Privacy is essential for personal security, autonomy, and maintaining control over one's information. It allows individuals to manage their reputation, avoid harm, and protect themselves from misuse of data by companies, governments, or individuals. However, social media platforms often compromise privacy, raising ethical concerns about surveillance and consent.
social-media-ethics-automation.github.io
People working with data sets always have to deal with problems in their data, stemming from things like mistyped data entries, missing data, and the general problem of all data being a simplification of reality. Sometimes a dataset has so many problems that it is effectively poisoned or not feasible to work with.
Unintentional and intentional data poisoning can severely compromise the usefulness of datasets. Whether caused by viral social media trends or deliberate sabotage, such as spamming job applications, these incidents highlight the vulnerability of data collection processes and the potential for disruption, often undermining research or organizational operations.
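A minimal sketch of the kinds of data problems mentioned above, using pandas (assuming it is installed); the column names and bad values are invented for illustration.

```python
import pandas as pd

# A small dataset with a mistyped entry and a missing value.
df = pd.DataFrame({
    "username": ["ada", "grace", "alan"],
    "age": ["36", "forty", None],   # "forty" is mistyped; None is missing
})

# Coerce mistyped numbers to NaN instead of crashing, then inspect the damage.
df["age"] = pd.to_numeric(df["age"], errors="coerce")
print(df["age"].isna().sum(), "problem rows out of", len(df))

# One (lossy) option: drop rows we can't repair. This choice is itself a
# simplification of reality and may bias whatever analysis comes next.
clean = df.dropna(subset=["age"])
print(clean)
```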
social-media-ethics-automation.github.io
Social media platforms use the data they collect on users and infer about users to increase their power and increase their profits. One of the main goals of social media sites is to increase the time users are spending on their sites. The more time users spend, the more money the site can get from ads, and the more power and influence those social media sites have over those users. So social media sites use the data they collect to try to figure out what keeps people using their site, and what they can do to convince those users that they need to open it again later. Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to …
Social media platforms use data to increase user engagement and profits through targeted advertising. While this can be useful for businesses and consumers, it raises ethical concerns when ads are directed at vulnerable groups, like children or addicts, exploiting their weaknesses for financial gain.
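A toy sketch of interest-based ad targeting with made-up user data (not any platform’s real system): the ad is only shown to users whose recorded or inferred interests overlap with the advertiser’s target interests.

```python
users = [
    {"name": "user_a", "interests": {"spiders", "plush toys", "hiking"}},
    {"name": "user_b", "interests": {"cooking", "gardening"}},
]

ad = {"product": "spider stuffed animal", "target_interests": {"spiders", "plush toys"}}

def audience_for(ad, users):
    """Return users whose interests overlap the ad's targeting criteria."""
    return [u["name"] for u in users if u["interests"] & ad["target_interests"]]

print(audience_for(ad, users))  # ['user_a']
```

The same mechanism raises the ethical concerns in the summary above: the targeting criteria could just as easily be a vulnerability (age, addiction-related interests) as a hobby.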
social-media-ethics-automation.github.io
In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from.
Trolling for newbies highlights the dynamics of online communities, where experienced users assert their status by tricking newcomers. This reinforces group identity through shared knowledge, but it also fosters exclusion and creates barriers for those trying to join, shaping the early culture of internet message boards with elitism.
social-media-ethics-automation.github.io
Every “we” implies a not-“we”. A group is constituted in part by who it excludes. Think back to the origin of humans caring about authenticity: if being able to trust each other is so important, then we need to know WHICH people are supposed to be entangled in those bonds of mutual trust with us, and which are not from our own crew. As we have developed larger and larger societies, states, and worldwide communities, the task of knowing whom to trust has become increasingly large. All groups have variations within them, and some variations are seen as normal. But the bigger groups get, the more variety shows up, and starts to feel palpable. In a nation or community where you don’t know every single person, how do you decide who’s in your squad?
As societies grow, trust becomes harder to establish, making group identity more crucial. People often rely on shared values, cultural markers, or common experiences to determine who belongs. The need to define "we" versus "not-we" becomes a way to manage trust and maintain social cohesion in complex, diverse communities.
social-media-ethics-automation.github.io
Select one of the above assumptions that you think is important to address. Then write a 1-2 sentence scenario where a user could not use Facebook as expected because of the assumption you selected. This represents one way the design could exclude certain users.
Scenario: A domestic abuse survivor trying to rebuild their life wants to connect with friends and family on Facebook, but they cannot use a pseudonym due to Facebook's policy requiring their legal name. By using their real name, they risk being found by their abuser, making the platform unsafe for them to use.
What assumptions does Facebook’s name policy make about its users’ identities and their needs that might not be true or might cause problems? List as many as you can think of (bullet points encouraged).
- Assumes everyone uses a single, consistent name: The policy assumes that people use only one name across all contexts, but many people use different names in different settings (e.g., professional vs. personal).
- Assumes legal names reflect authentic identity: Some individuals may not feel that their legal name represents their true identity, such as members of the LGBTQ+ community who haven’t legally changed their name yet.
- Overlooks cultural naming conventions: The policy doesn’t fully account for cultures with naming conventions that don’t fit Western norms, such as individuals with mononyms (one name) or complex, multi-part names.
- Ignores privacy and safety concerns: People facing threats (e.g., activists, abuse survivors) may need to use pseudonyms to protect themselves, and the policy could force them to reveal their real identity, putting them at risk.
- Assumes names are stable over time: It doesn’t consider that people’s names might change due to personal or cultural reasons, making it difficult for users who need flexibility in updating their profiles.
social-media-ethics-automation.github.io
Open two social media sites and choose equivalent views on each (e.g., a list of posts, an individual post, an author page etc.). List what actions are immediately available. Then explore and see what actions are available after one additional action (e.g., opening a menu), then what actions are two steps away. What do you notice about the similarities and differences in these sites?
I compared Twitter (now X) and Instagram post views. Both immediately allow likes, comments, and sharing. One step away, Twitter offers retweets and quoting, while Instagram allows saving posts. Two steps away, Twitter gives options like adding to lists, and Instagram provides reporting. Both focus on easy engagement but differ in extra features like lists vs. post saving.
Now it’s your turn to try designing a social media site. Decide on a type of social media site (e.g., a video site like youtube or tiktok, or a dating site, etc.), and a particular view of that site (e.g., profile picture, post, comment, etc.). Draw a rough sketch of the view of the site, and then make a list of:
- What actions would you want available immediately?
- What actions would you want one or two steps away?
- What actions would you not allow users to do (e.g., there is no button anywhere that will let you delete someone else’s account)?
For a social media site focused on collaborative learning (like a mix of Reddit and Khan Academy), I'd design a profile page showing posts, achievements, and study interests. Immediate actions: create posts, comment, upvote. One or two steps away: start discussions, follow users. Not allowed: deleting others' posts, editing achievements.
social-media-ethics-automation.github.io
Can you think of an example of pernicious ignorance in social media interaction?
Pernicious ignorance in social media often occurs when users spread misinformation, ignoring facts and rejecting correction. For example, during public health crises, false claims about treatments or vaccines persist despite expert advice. This ignorance fuels division, harms public understanding, and undermines efforts to address real issues effectively.
social-media-ethics-automation.github.io
All data is a simplification of reality.
Data reduces vast details into manageable representations, often leaving out nuances. While useful for analysis, interpretation, and decision-making, it’s important to recognize its limitations and avoid oversimplifying or missing context.
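As a small illustration of how recording data forces simplification (the field names and categories below are made-up choices, and each choice discards information):

```python
from dataclasses import dataclass

@dataclass
class UserRecord:
    # Every field is a simplifying choice: one name, an integer age, a single
    # "location" string, and a short interest list cannot capture a whole person.
    name: str
    age: int
    location: str
    interests: tuple

# A real person reduced to a handful of values; nicknames, travel, changing
# interests, and context are lost the moment these fields are chosen.
row = UserRecord(name="Sam", age=29, location="Seattle", interests=("music", "chess"))
print(row)
```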
social-media-ethics-automation.github.io
Why do you think social media platforms allow bots to operate?
Social media platforms allow bots to operate because they can boost user engagement, automate tasks, and drive traffic. Some bots serve useful purposes like scheduling posts or providing updates. However, platforms may struggle to regulate harmful bots, which can spread misinformation or manipulate discussions, due to the scale and complexity of monitoring.
social-media-ethics-automation.github.io
Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them.
Bots on social media can mimic regular users, either posting autonomously or acting as tools for humans to post content. These automated systems can seamlessly blend into online communities, making it difficult to distinguish between human and bot interactions, thus influencing discussions or amplifying certain messages without appearing artificial.
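A minimal sketch of what “a bot posting through an account” can look like, assuming the tweepy library and valid API credentials (the strings below are placeholders); the exact client class and method names should be checked against tweepy’s current documentation.

```python
import tweepy

# Placeholder credentials; a real bot would load these from a secure config,
# never from hard-coded strings.
client = tweepy.Client(
    consumer_key="CONSUMER_KEY",
    consumer_secret="CONSUMER_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_TOKEN_SECRET",
)

# The bot posts on behalf of the account, just like a human pressing "Tweet".
client.create_tweet(text="Good morning! This post was made automatically by a bot.")
```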
social-media-ethics-automation.github.io
Being and becoming an exemplary person (e.g., benevolent; sincere; honoring and sacrificing to ancestors; respectful to parents, elders and authorities, taking care of children and the young; generous to family and others). These traits are often performed and achieved through ceremonies and rituals (including sacrificing to ancestors, music, and tea drinking), resulting in a harmonious society. Key figures:
Confucianism, founded by Confucius in 6th-century BCE China, emphasizes morality, social harmony, and ethical conduct. Its core values include filial piety, respect for authority, and self-cultivation. Confucianism focuses on the importance of relationships, particularly family, and promotes virtues like benevolence, righteousness, and proper behavior to create a harmonious society.
social-media-ethics-automation.github.io
What do you think is the responsibility of tech workers to think through the ethical implications of what they are making?
Tech workers have a responsibility to consider the ethical implications of their creations, as their work can significantly impact society. They must prioritize user safety, privacy, fairness, and long-term consequences, ensuring that technology is used to benefit rather than harm. Ethical awareness fosters accountability and helps prevent unintended negative outcomes, shaping a more responsible tech industry.