-
social-media-ethics-automation.github.io
-
Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content. This is particularly important for people who make their money from social media content. For example, in the case of the simple “show latest posts” algorithm, the best way to get your content seen is to constantly post and repost your content (though if you annoy users too much, it might backfire).
This paragraph effectively highlights how users attempt to leverage recommendation algorithms to amplify their content, which is especially important for individuals who make a living from social media. The example of the "show latest posts" algorithm illustrates that frequently posting and reposting content can be a simple way to increase visibility, though it also carries the risk of annoying followers. This strategy reflects the tactical thinking of social media users in trying to balance maximizing content reach while avoiding negative backlash from overexposure.
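To make the excerpt's point concrete, here is a hypothetical sketch of the “show latest posts” algorithm it describes (all post data invented for illustration): under pure recency ranking, whoever posts most often stays at the top of the feed.

```python
# A hypothetical sketch of a "show latest posts" feed: rank purely by
# recency. All post data here is invented for illustration.
from datetime import datetime

posts = [
    {"author": "alice", "text": "original post", "time": datetime(2024, 10, 1, 9, 0)},
    {"author": "bob", "text": "repost #1", "time": datetime(2024, 10, 1, 9, 5)},
    {"author": "bob", "text": "repost #2", "time": datetime(2024, 10, 1, 9, 10)},
]

def latest_posts_feed(posts):
    """Return posts newest-first; the only ranking signal is the timestamp."""
    return sorted(posts, key=lambda p: p["time"], reverse=True)

# bob's constant reposting pushes alice's post down the feed.
for post in latest_posts_feed(posts):
    print(post["time"], post["author"], post["text"])
```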
-
-
social-media-ethics-automation.github.io
-
Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibility that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine vs. powder cocaine in the 90s. The guidelines suggested harsher sentences on the version of cocaine more commonly used by Black people, and lighter sentences on the version of cocaine more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act)
This passage highlights how systemic analysis can reveal biases and inequalities embedded in organizational rules, independent of individual intentions. The example of 1990s U.S. sentencing guidelines for crack vs. powder cocaine demonstrates how policies can produce racially biased outcomes, disproportionately impacting Black people. It underscores the importance of examining systems critically, as structural biases can lead to injustice even without individual prejudice.
-
-
social-media-ethics-automation.github.io
-
A disability is an ability that a person doesn’t have, but that their society expects them to have. For example:
- If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation.
- If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation.
- If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation.
- If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs, would have a disability in that situation.
Disabilities often arise from societal assumptions about what people are capable of, rather than from inherent limitations, highlighting the importance of inclusive design. Many environments and products are created with a “one-size-fits-all” approach, unintentionally excluding those who don't fit the standard expectations. By designing spaces and objects that accommodate diverse needs, society can reduce the limitations that people with disabilities face.
-
Some disabilities are visible disabilities that other people can notice by observing the disabled person (e.g., wearing glasses is an indication of a visual disability, or a missing limb might be noticeable). Other disabilities are invisible disabilities that other people cannot notice by observing the disabled person (e.g., chronic fatigue syndrome, contact lenses for a visual disability, or a prosthetic for a missing limb covered by clothing). Sometimes people with invisible disabilities get unfairly accused of “faking” or “making up” their disability (e.g., someone who can walk short distances but needs to use a wheelchair when going long distances).
Invisible disabilities are often misunderstood, leading people who have them to face accusations of "faking" or "pretending," which is unfair and hurtful. While some disabilities are visibly noticeable, others are hidden beneath the surface; society needs greater awareness and empathy to truly support these individuals. When assessing others' health or abilities, we should remember that appearances don't tell the whole story, and we should avoid making quick judgments.
-
-
social-media-ethics-automation.github.io
-
While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time). But while that is the proper security practice for storing passwords, companies don’t always follow it. For example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly and then hackers leaked their password database of 153 million users.
This passage emphasizes the expectations we have of social media platforms to keep our information secure and highlights notable failures in doing so. It underscores how improper security practices, such as storing passwords in plain text or using weak encryption, leave users vulnerable to data breaches. The examples of Facebook and Adobe demonstrate the serious consequences of these lapses, reminding us of the critical importance of robust security measures for protecting user data.
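The “special individual encryption process” the excerpt describes is usually salted, slow password hashing. Below is a minimal sketch using only Python's standard library (the function names hash_password and verify_password are my own); production systems would typically reach for a dedicated library like bcrypt or argon2.

```python
# A minimal sketch of salted password hashing with Python's standard
# library. Each password gets its own random salt, so identical
# passwords produce different digests, and the database never holds
# the password itself.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store instead of the plain password."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from the guess; each check is deliberately slow."""
    guess = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(guess, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))   # True
print(verify_password("password", salt, digest))  # False
```

Because each check repeats 600,000 hash iterations, an attacker with the database still has to brute-force every guess slowly, which is exactly the property the excerpt describes.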
-
-
social-media-ethics-automation.github.io
-
There are many reasons, both good and bad, that we might want to keep information private:
- There might be some things that we just feel like aren’t for public sharing (like how most people wear clothes in public, hiding portions of their bodies)
- We might want to discuss something privately, avoiding embarrassment that might happen if it were shared publicly
- We might want a conversation or action that happens in one context not to be shared in another (context collapse)
- We might want to avoid the consequences of something we’ve done (whether ethically good or bad), so we keep the action or our identity private
- We might have done or said something we want to be forgotten, or at least made less prominent
- We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private
- We might want to avoid physical danger from a stalker, so we might keep our location private
- We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad)

When we use social media platforms though, we at least partially give up some of our privacy. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly.
This passage effectively highlights the diverse reasons why people value privacy, ranging from maintaining dignity to protecting themselves from harm. It also raises important concerns about the illusion of privacy on social media, where supposedly private communications are still accessible to companies. The contrast between the need for privacy and the reality of online platforms prompts a critical discussion on how much control we really have over our personal information.
-
-
social-media-ethics-automation.github.io
-
Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended. See more in
This passage illustrates how datasets can be unintentionally skewed, compromising the integrity of data collection. The viral spread of information on platforms like TikTok can lead to overrepresentation of a specific demographic, limiting the usefulness of the surveys. It highlights the challenges of maintaining diverse and balanced datasets, especially in an open, online environment.
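A hypothetical sketch of the skew the excerpt describes: once one group floods a survey, its share of the sample makes the dataset unusable for population-level claims. The age groups and counts below are invented for illustration.

```python
# A hypothetical illustration of unintentional dataset poisoning:
# a balanced survey sample vs. the same survey after a viral video
# brings in one narrow demographic. All numbers are invented.
from collections import Counter

balanced = ["18-24", "25-34", "35-44", "45-54", "55+"] * 20
after_viral = balanced + ["18-24"] * 400  # influx from one demographic

for label, sample in [("before", balanced), ("after", after_viral)]:
    shares = {group: count / len(sample) for group, count in Counter(sample).items()}
    print(label, {group: f"{share:.0%}" for group, share in sorted(shares.items())})
# before: every group at 20%; after: "18-24" jumps to 84% of the sample.
```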
-
-
social-media-ethics-automation.github.io
-
Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’.
This passage highlights how social media platforms monetize by targeting advertisements to specific audiences, which can be beneficial when promoting niche products. However, it also raises concerns about the ethical implications of targeting vulnerable groups or manipulating users for political gain, as seen in the 2016 Trump campaign. This dual nature of targeted advertising calls for a balance between business interests and ethical responsibility.
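Mechanically, interest targeting can be as simple as filtering users by the interests an advertiser selects. Here is a hypothetical sketch of the spider-toy example (all users and interest labels are invented).

```python
# A hypothetical sketch of interest-based ad targeting: show an ad
# only to users whose recorded interests overlap the campaign's.
users = [
    {"name": "dana", "interests": {"spiders", "plush toys"}},
    {"name": "eli", "interests": {"cooking", "hiking"}},
    {"name": "fay", "interests": {"plush toys", "board games"}},
]

def target_ad(users, campaign_interests):
    """Return users with at least one interest in common with the campaign."""
    return [u["name"] for u in users if u["interests"] & campaign_interests]

print(target_ad(users, {"spiders", "plush toys"}))  # ['dana', 'fay']
```

The same filter works just as easily with a category like “interested in gambling,” which is exactly where the excerpt's ethical concerns come in.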
-
-
social-media-ethics-automation.github.io
-
Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls. Instead, Film Crit Hulk suggests giving power to the victims and using “skilled moderation and the willingness to kick people off platforms for violating rules about abuse”.
This passage argues that “don’t feed the trolls” puts the burden on victims, giving trolls too much power. Instead, Film Crit Hulk advocates for empowering victims through strong moderation and removing abusers from platforms. It highlights the importance of holding platforms accountable for creating safe online spaces.
-
-
social-media-ethics-automation.github.io
-
Trolling is when an Internet user posts inauthentically (often something false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction. When the goal is provoking an emotional reaction, it is often for a negative emotion, such as anger or emotional pain. When the goal is disruption, it might be attempting to derail a conversation (e.g., concern trolling), or make a space no longer useful for its original purpose (e.g., joke product reviews), or try to get people to take absurd fake stories seriously.
This passage defines trolling by emphasizing its two main goals: provoking negative emotions and causing disruption. Trolling often involves insincere behavior, like spreading false information or derailing conversations. It negatively affects both individual emotions and the quality of online spaces. Trolling also reflects the challenges of online anonymity and the openness of social media.
-
-
social-media-ethics-automation.github.io
-
This trend brought complicated issues of authenticity because presumably there was some human employee who was charged with running the company’s social media account. We are simultaneously aware that, on the one hand, that human employee may be expressing themselves authentically (whether playfully or about serious issues), but also that the human is at the mercy of the corporation, which can at any moment tell them to stop or replace them with someone else.
This paragraph effectively captures the tension between authenticity and corporate control in social media management. It highlights the duality where an employee may express genuine thoughts but remains bound to the corporation's directives, making their authenticity conditional and fragile.
-
-
social-media-ethics-automation.github.io
-
On social media, context collapse is a common concern, since on a social networking site you might be connected to very different people (family, different groups of friends, co-workers, etc.). Additionally, something that was shared within one context (like a private message), might get reposted in another context (publicly posted elsewhere).
The paragraph introduces an important and complex issue concisely. Adding a few more details about its effects on behavior, specific examples, and emotional impact could make it even more insightful.
-
-
social-media-ethics-automation.github.io
-
When computers store numbers, there are limits to how much space can be used to save each number. This limits how big (or small) the numbers can be, and causes rounding with floating-point numbers. Additionally, programming languages might include other ways of storing numbers, such as fractions, complex numbers, or limited number sets (like only positive integers).
When computers store numbers, they are limited by memory space, which may lead to problems such as floating-point precision errors and integer overflows. Different programming languages provide special data types such as fractions and complex numbers to solve these limitations.
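A quick sketch of those limits in Python: its floats are 64-bit IEEE 754 values, so rounding shows up immediately, while the standard-library fractions module stores exact ratios. (Python's own integers grow without overflow, unlike the fixed-size integers of many other languages.)

```python
# Floating-point rounding: 0.1 and 0.2 cannot be stored exactly in
# binary, so their sum is slightly off.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# The fractions module stores exact ratios instead of rounded floats.
from fractions import Fraction
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# Complex numbers are another built-in numeric type.
print((3 + 4j).real, abs(3 + 4j))  # 3.0 5.0
```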
-
-
social-media-ethics-automation.github.io
-
The 1980s and 1990s also saw an emergence of more instant forms of communication with chat applications. Internet Relay Chat (IRC) lets people create “rooms” for different topics, and people could join those rooms and participate in real-time text conversations with the others in the room.
IRC was influential in the early development of online communities, offering a decentralized, flexible, and open environment for communication, which contributed to its popularity in the 1990s. Although it's less popular now, IRC still has a dedicated user base, and its influence can be seen in modern chat tools like Slack, Discord, and others.
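For flavor, here is a minimal sketch of the protocol behind those “rooms”: a raw-socket IRC client that registers a nickname and joins a channel. The server address, nickname, and channel below are assumptions for illustration, and a robust client would wait for the server's registration reply before joining.

```python
# A minimal IRC client sketch using raw sockets. NICK/USER register
# with the server, JOIN enters a room (channel), PRIVMSG speaks in it.
import socket

SERVER, PORT = "irc.libera.chat", 6667  # assumed server; any IRC network works
NICK = "demo_user_31415"                # hypothetical nickname

with socket.create_connection((SERVER, PORT)) as sock:
    def send(line: str) -> None:
        sock.sendall((line + "\r\n").encode())  # IRC lines end with CRLF

    send(f"NICK {NICK}")
    send(f"USER {NICK} 0 * :{NICK}")  # username and "real name"
    send("JOIN #python")              # join a "room" (channel)
    send("PRIVMSG #python :Hello from a tiny IRC client!")
    print(sock.recv(4096).decode(errors="replace"))  # peek at server replies
```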
-
-
social-media-ethics-automation.github.io
-
As you can see in the apple example, any time we turn something into data, we are making a simplification.
- If we are counting the number of something, like apples, we are deciding that each one is equivalent.
- If we are writing down what someone said, we are losing their tone of voice, accent, etc.
- If we are taking a photograph, it is only from one perspective, etc.

Different simplifications are useful for different tasks. Any given simplification will be helpful for some tasks and be unhelpful for others. See also this saying in statistics: “All models are wrong, but some are useful.”
The article's apple example ignores the variations in each apple, such as size, color, and quality, by simply counting the quantity of apples. Similarly, when recording a conversation, emotional details like tone and intonation are lost even though the words themselves are captured. Likewise, a photograph can only depict a scene from one perspective, never the whole of it. Every simplification has limitations; what makes one effective is whether it preserves the information relevant to the task at hand.
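A tiny hypothetical example of the same point in code: the same three apples, simplified three different ways, each useful for a different task. All field names and values are invented.

```python
# The same apples, turned into data several different ways. Each
# simplification keeps what one task needs and discards the rest.
apples = [
    {"variety": "Fuji", "mass_g": 182, "bruised": False},
    {"variety": "Gala", "mass_g": 140, "bruised": True},
    {"variety": "Fuji", "mass_g": 171, "bruised": False},
]

count = len(apples)                                 # fine for inventory; loses variety, mass, condition
total_mass_g = sum(a["mass_g"] for a in apples)     # fine for shipping; loses the count of bruised fruit
sellable = [a for a in apples if not a["bruised"]]  # fine for a storefront; loses the rejects

print(count, total_mass_g, len(sellable))  # 3 493 2
```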
-