- Last 7 days
-
social-media-ethics-automation.github.io
-
When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users. Some recommendation algorithms can be simple, such as reverse chronological order, meaning it shows users the latest posts (like how blogs work, or Twitter's "See latest tweets" option). They can also be very complicated, taking into account many factors, such as:
This section illustrates how recommendation algorithms shape what we see online, ranging from simple approaches like displaying the most recent posts to more complicated ones that weigh factors like interactions and location. It's fascinating how these algorithms influence our decisions and habits without our knowledge. However, this raises concerns about privacy and exploitation, particularly when recommendations are based on data gathered without explicit consent. These algorithms can also produce echo chambers, which limit the diversity of viewpoints. Should social media sites be more transparent about how their algorithms work, or would that lead to new problems, such as people attempting to game them?
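To make the simplest case concrete for myself, here is a minimal sketch of reverse chronological ordering; the post data and field names are my own invention, not the book's:

```python
from datetime import datetime

# Hypothetical posts; on a real platform these would come from a database.
posts = [
    {"text": "First post!", "timestamp": datetime(2024, 10, 1, 9, 0)},
    {"text": "Lunch pics", "timestamp": datetime(2024, 10, 3, 12, 30)},
    {"text": "Big announcement", "timestamp": datetime(2024, 10, 2, 18, 45)},
]

# Reverse chronological "recommendation": newest posts first,
# like Twitter's "See latest tweets" option.
feed = sorted(posts, key=lambda post: post["timestamp"], reverse=True)

for post in feed:
    print(post["timestamp"], post["text"])
```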
-
-
social-media-ethics-automation.github.io
-
11.2.1. Individual vs. Systemic Analysis

Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibility that aren't necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine and powder cocaine in the 90s. The guidelines suggested harsher sentences for the version of cocaine more commonly used by Black people, and lighter sentences for the version more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act).
-
-
social-media-ethics-automation.github.io
-
10.1. Disability
This section encourages us to evaluate disability as a social construct shaped by preconceived beliefs about what abilities are "normal." For example, trichromats are not considered "disabled," even though they lack the additional color sensitivity that tetrachromats have. It serves as a powerful reminder that disabilities are frequently imposed barriers rather than inherent limitations. This makes me wonder how much more inclusive our environments could be if they accommodated a broader spectrum of abilities from the outset.
-
-
social-media-ethics-automation.github.io
-
10.2.1. Coping Strategies
After reading this, I realized that coping mechanisms such as masking place the whole burden on people with disabilities to adapt to situations that were not designed for them. I believe this underscores a wider issue: when systems are inaccessible, disabled people are unfairly expected to find ways to fit in, often at considerable personal cost. It leaves me wondering whether there is a better way for society to support people with invisible disabilities so they don't feel compelled to hide or adapt alone.
-
- Oct 2024
-
social-media-ethics-automation.github.io
-
There are many reasons, both good and bad, that we might want to keep information private.
Recent social media leaks show how easily private material can propagate, and context collapse can result in the unintentional exposure of private conversations. Since the boundaries between private and public online interactions are becoming increasingly hazy, platforms must guarantee stronger privacy protections.
-
-
social-media-ethics-automation.github.io
-
While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.
The Adobe example demonstrates how even large platforms fail to secure consumer data, in that case through improper encryption. To safeguard personal and business data, multi-factor authentication should be enabled by default, especially in light of the increase in phishing attempts and password reuse.
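To see why "improper encryption" mattered, here is a minimal sketch of the safer alternative, salted password hashing, using Python's standard library; this is my own illustration, not Adobe's actual system:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a random salt so the stored value can't be reversed."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the login attempt and compare in constant time."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

Unlike encryption, a hash like this can't be decrypted with a stolen key, which is the failure mode the Adobe breach exposed.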
-
-
social-media-ethics-automation.github.io
-
Data can be poisoned intentionally as well. For example, in 2021, workers at Kellogg's were upset at their working conditions, so they agreed to go on strike and not work until Kellogg's agreed to improve their work conditions. Kellogg's announced that they would hire new workers to replace the striking workers:

"Kellogg's proposed pay and benefits cuts while forcing workers to work severe overtime as long as 16-hour days for seven days a week. Some workers stayed on the job for months without a single day off. The company refuses to meet the union's proposals for better pay, hours, and benefits, so they went on strike. Earlier this week, the company announced it would permanently replace 1,400 striking workers." (People Are Spamming Kellogg's Job Applications in Solidarity with Striking Workers – Vice MotherBoard)

People in the antiwork subreddit found the website where Kellogg's posted their job listing to replace the workers. So those Redditors suggested they spam the site with fake applications, poisoning the job application data, so Kellogg's wouldn't be able to figure out which applications were legitimate (we could consider this a form of trolling). Then Kellogg's wouldn't be able to replace the striking workers, and they would have to agree to better working conditions. Then Sean Black, a programmer on TikTok, saw this and decided to contribute by creating a bot that would automatically log in and fill out applications with random user info, increasing the rate at which he (and others who used his code) could spam the Kellogg's job applications.
The Kellogg's incident, in which members of the public used technical means to interfere with the company's hiring, is a fascinating illustration of collective resistance in the digital age. In my opinion, it represents society's reaction to large businesses abusing their power. But this kind of "data poisoning" raises the moral question of what counts as an appropriate mode of dissent. Flooding the hiring process with phony applications paralyzes it, but it may have unintended consequences for other parties, so a deeper analysis of the boundaries of technical confrontation is required.
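Out of curiosity about the mechanics, here is a rough sketch of the general technique, submitting randomized form data to poison a dataset. The URL and field names are hypothetical placeholders; this is not Sean Black's actual code:

```python
import random
import requests

FIRST_NAMES = ["Alex", "Sam", "Jordan", "Taylor"]
LAST_NAMES = ["Smith", "Lee", "Garcia", "Nguyen"]

def random_application() -> dict:
    """Build one fake application filled with randomized user info."""
    first = random.choice(FIRST_NAMES)
    last = random.choice(LAST_NAMES)
    return {
        "first_name": first,
        "last_name": last,
        "email": f"{first.lower()}.{last.lower()}{random.randint(1, 9999)}@example.com",
        "phone": f"555-{random.randint(1000, 9999)}",
    }

# Hypothetical endpoint; the real bot targeted Kellogg's actual application form.
response = requests.post("https://jobs.example.com/apply", data=random_application())
print(response.status_code)
```

Seeing how little code this takes makes the ethical question sharper: the barrier to this kind of protest is almost entirely moral, not technical.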
-
-
social-media-ethics-automation.github.io
-
Social media platforms collect various types of data on their users. Some data is directly provided to the platform by the users. Platforms may ask users for information like: email address, name, profile picture, interests, friends. Platforms also collect information on how users interact with the site. They might collect information like (they don't necessarily collect all of this, but they might): when users are logged on and logged off, who users interact with, what users click on, what posts users pause over, where users are located, and what users send in direct messages to each other. Online advertisers can see what pages their ads are being requested on, and track users across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website.
Social media platforms collect user data not only to improve the user experience but also for compelling business reasons. This makes me wonder: as data collection becomes more widespread, do we have appropriate control over how our data is used? While users often implicitly agree to data-collection practices when they sign up, that does not mean they fully understand how the data will be used. As a result, this collection is not always welcome and may sometimes go against the user's wishes.
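To picture what this interaction data might look like, here is a hypothetical log entry of the kind a platform could record; the field names are my own guesses, not any platform's real schema:

```python
# One hypothetical interaction-event record; real platforms' schemas differ.
interaction_event = {
    "user_id": "u_48213",
    "event_type": "pause_over_post",   # e.g. click, like, pause, scroll_past
    "post_id": "p_99107",
    "timestamp": "2024-10-05T14:23:11Z",
    "duration_ms": 3200,               # how long the user lingered on the post
    "location": "Seattle, WA",         # coarse location, e.g. inferred from IP
}
```

Even a record this small shows why the data is valuable: multiply it by every pause and click, and a detailed behavioral profile emerges.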
-
-
social-media-ethics-automation.github.io
-
7.4. Responding to trolls?

One of the traditional pieces of advice for dealing with trolls is "Don't feed the trolls," which means that if you don't respond to trolls, they will get bored and stop trolling. We can see this advice as well in the trolling community's own "Rules of the Internet":

"Do not argue with trolls - it means that they win"

But the essayist Film Crit Hulk argues against this in "Don't feed the trolls, and other hideous lies." That piece argues that the "don't feed the trolls" strategy doesn't stop trolls from harassing:

"Ask anyone who has dealt with persistent harassment online, especially women: [trolls stopping because they are ignored] is not usually what happens. Instead, the harasser keeps pushing and pushing to get the reaction they want with even more tenacity and intensity. It's the same pattern on display in the litany of abusers and stalkers, both online and off, who escalate to more dangerous and threatening behavior when they feel like they are being ignored."

Film Crit Hulk goes on to say that the "don't feed the trolls" advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls. Instead, Film Crit Hulk suggests giving power to the victims and using "skilled moderation and the willingness to kick people off platforms for violating rules about abuse."
The "don't feed the trolls" advice seems out of date now, especially given how frequent internet harassment is. I agree with Film Crit Hulk that ignoring trolls would just make them more aggressive. Instead than blaming victims, platforms should step up and implement stricter moderation to prevent trolls from worsening situations.
-
-
social-media-ethics-automation.github.io
-
7.2.2. Origins of Internet Trolling

We can trace Internet trolling to early social media in the 1980s and 1990s, particularly in early online message boards and in early online video games. In the early Internet message boards that were centered around different subjects, experienced users would "troll for newbies" by posting naive questions that all the experienced users were already familiar with. The "newbies" who didn't realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn't fall for the troll like the "newbies" did. These message boards are where the word "troll" with this meaning comes from.

One set of the early Internet-based video games were Multi-User Dungeons (MUDs), where you were given a text description of where you were and could say where to go (North, South, East, West), and text would tell you where you were next. In these games, you would come across other players and could type messages or commands to attack them. These were the precursors to more modern massively multiplayer online role-playing games (MMORPGs). In these MUDs, players developed activities that we now consider trolling, such as "griefing," where one player intentionally causes another player "grief" or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and "flaming," where a player intentionally starts a hostile or offensive conversation.

In the 2000s, trolling went from an activity done in some communities to the creation of communities that centered around trolling, such as 4chan (2003), Encyclopedia Dramatica (2004), and some forums on Reddit (2005).
It's funny how trolling began as a prank among experienced users to tease "newbies." However, the fact that it grew into something more toxic, particularly with rules like 30 and 31, demonstrates how easily seemingly innocent habits can become harmful, especially when they target certain groups, such as women.
-
-
social-media-ethics-automation.github.io
-
Inauthentic behavior is when the reality doesn't match what is being presented. Inauthenticity has, of course, existed throughout human history, from Ea-nasir complaining in 1750 BCE that the copper he ordered was not the high quality he had been promised, to 1917 CE in England when Arthur Conan Doyle (the author of the Sherlock Holmes stories) was fooled by photographs that appeared to be of a child next to fairies.
I feel this issue is magnified in today's information environment. People curate how they portray themselves online, which often creates a disconnect that can damage relationships and self-perception. We need more authentic interactions and more honesty between people.
-
-
social-media-ethics-automation.github.io
-
5.6.1. Social Media Connection Types

One difference you may notice with different social media sites is in how you form connections with others. Some social media sites don't have any formal connections, like two users who happen to be on the same bulletin board. Some social media sites only allow reciprocal connections, like being "friends" on Facebook. Some social media sites offer one-way connections, like following someone on Twitter or subscribing to a YouTube channel. There are, of course, many variations and nuances besides what we mentioned above, but we wanted to get you started thinking about some different options.
Social media connection types include more than just reciprocal and one-way relationships. Reddit and Facebook Groups, for example, allow users to interact in groups based on mutual interests without requiring personal relationships. Another option is ad hoc or anonymous connection, as on Clubhouse or 4chan, where users can participate in short-term discussions without disclosing their personal identity. These various modes of connection enrich social media interactions by responding to the needs and interests of different users.
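One way I find helpful to think about the basic distinction: reciprocal connections are undirected edges in a graph, while one-way connections are directed edges. A minimal sketch of my own, not from the book:

```python
# Reciprocal connections (e.g., Facebook friends): undirected edges,
# so each friendship is recorded for both users.
friendships = {
    "alice": {"bob"},
    "bob": {"alice"},
}

# One-way connections (e.g., Twitter follows): directed edges,
# so following someone doesn't mean they follow you back.
follows = {
    "alice": {"bob"},   # alice follows bob
    "bob": set(),       # bob follows no one
}

print("bob" in friendships["alice"] and "alice" in friendships["bob"])  # True: mutual
print("alice" in follows["bob"])  # False: the connection is one-way
```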
-
-
social-media-ethics-automation.github.io
-
5.3.1. Weblogs (Blogs)

In the mid-1990s, some internet users started manually adding regular updates to the top of their personal websites (leaving the old posts below), using their sites as an online diary, or a (web) log of their thoughts. In 1998/1999, several web platforms were launched to make it easy for people to make and run blogs (e.g., LiveJournal and Blogger.com). With these blog hosting sites, it was much simpler to type up and publish a new blog entry, and others visiting your blog could subscribe to get updates whenever you posted a new post, and they could leave a comment on any of the posts.
This section discusses the rise of blogging in the late 1990s, which altered how people express themselves. People could quickly create content and communicate with their audiences using sites like LiveJournal and Blogger. For example, a photography enthusiast could post photographs and tips on a blog, and readers could subscribe and leave comments to discuss them, resulting in a small community.
-
-
social-media-ethics-automation.github.io
-
The first way of combining data is by making a list. So we can make a list of the numbers from 1 to 10: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
For this example, it is easy to explain: if we have a numeric range, then listing every number that can occur ensures that every possible outcome is accounted for. I learned something similar when I studied INFO201, which covered the R language and some informatics fundamentals.
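In Python, which this book uses, the same list can be built without typing every number; a small sketch:

```python
# Build the list of numbers 1 through 10.
numbers = list(range(1, 11))   # range's end point is exclusive, so use 11
print(numbers)                 # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Listing every value in the range confirms every possible outcome is covered.
print(len(numbers))            # 10
```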
-
-
social-media-ethics-automation.github.io
-
Do I have enough money in my wallet to pay for the item? Does this tweet start with “hello” (meaning it is a greeting)?
Q1: Yes. As in the example, we can write has_enough_money = money_in_wallet > cost_of_item: if the money in the wallet is greater than the cost of the item, then has_enough_money is True and we can pay. Q2: I'm less sure about this one, but it probably can be checked the same way, by testing whether the tweet's text starts with "hello".
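Both questions can be written as boolean expressions; a short sketch with assumed example values:

```python
money_in_wallet = 10.00
cost_of_item = 7.50
has_enough_money = money_in_wallet >= cost_of_item   # >= so exact change still counts
print(has_enough_money)   # True

tweet = "Hello everyone, good morning!"
is_greeting = tweet.lower().startswith("hello")      # case-insensitive check
print(is_greeting)        # True
```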
-
-
social-media-ethics-automation.github.io
-
Why do you think social media platforms allow bots to operate?
I believe social media platforms allow bots to operate because bots promote user engagement and streamline routine duties. Bots can automate tasks like customer service or content updates, saving time for both users and platforms. However, there is a downside: bots can be used to spread misinformation or manipulate public opinion, complicating accountability and making it hard to pursue criminal or unethical activity when it occurs.
-
-
social-media-ethics-automation.github.io
-
3.2.4. Registered vs. Unregistered bots

Most social media platforms provide an official way to connect a bot to their platform (called an Application Programming Interface, or API). This lets the social media platform track these registered bots and provide certain capabilities and limits to the bots (like a rate limit on how often the bot can post). But when some people want to get around these limits, they can make bots that don't use this official API, but instead open the website or app and then have a program perform clicks and scrolls the way a human might. These are much harder for social media platforms to track, and they normally ban accounts doing this if they are able to figure out that is what is happening.
I learned the distinction between registered and unregistered bots: registered bots follow the platform's guidelines via an Application Programming Interface (API), while unregistered bots pose a challenge for platform governance because they bypass those rules. I'll keep thinking about the impact of such bots on platform integrity, for example how social media companies can better identify unregistered bots while still protecting user privacy.
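A minimal sketch of what a registered bot might look like, posting through an official API and pacing itself under a rate limit; the endpoint, token, and limit are hypothetical placeholders, not any real platform's API:

```python
import time
import requests

API_URL = "https://api.example-platform.com/v1/posts"  # hypothetical endpoint
API_TOKEN = "YOUR_REGISTERED_BOT_TOKEN"                # issued when the bot is registered
POSTS_PER_MINUTE = 5                                   # hypothetical rate limit

def post_messages(messages: list[str]) -> None:
    """Post each message via the official API, pausing to stay under the rate limit."""
    for message in messages:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"text": message},
        )
        print(response.status_code, message)
        time.sleep(60 / POSTS_PER_MINUTE)  # wait so the bot never exceeds the limit

post_messages(["Hello from a registered bot!", "Posting politely via the API."])
```

An unregistered bot would skip all of this and script clicks in a browser instead, which is exactly why platforms find it so much harder to track.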
-
-
social-media-ethics-automation.github.io
-
Deontological thinking comes out of the same era as Natural Rights thinking, and they are rooted in similar assumptions about the world. Deontology is often associated with Kant, because at that time, he gave us one of the first systematic, or comprehensive, interpretations of those ideas in a fully-fledged ethical framework. But deontological ethics does not need to be based on Kant’s ethics, and many ethicists working in the deontological tradition have suggested that reasoning about the objective reality should lead us to derive different sets of principles.
The description of Deontology can be expanded with Kant’s famous example about lying. He believed that even if lying could protect someone, it’s still wrong because the rule “it’s okay to lie” couldn’t be universalized without destroying trust in society. I agree with him—if everyone became dishonest to "protect others," the world would become colder and less compassionate, and trust between people would disappear. This shows that deontology values absolute duties over outcomes.
-
-
social-media-ethics-automation.github.io
-
Why do you think the people who Kumail talked with didn’t have answers to his questions?
I understand the staff's reaction better now; perhaps the question is one that few people can answer because it is so complex. Since the start of the 21st century, the technology industry has developed rapidly, and people have not prioritized ethical issues; generating greater value and wealth through these technologies is what they value more. R&D companies focus on technical feasibility and market potential, leaving little opportunity to discuss ethical issues or prepare for unintended consequences. So it makes sense that staff don't know how to respond when asked.
-