- Last 7 days
-
social-media-ethics-automation.github.io
-
Surveillance capitalism began when internet companies started tracking user behavior data to make their sites more personally tailored to users. These companies realized that this data was something that they could profit from, so they began to collect more data than strictly necessary ("behavioral surplus") and see what more they could predict about users. Companies could then sell this data about users directly, or (more commonly), they could keep their data hidden, but use it to sell targeted advertisements. So, for example, Meta might let an advertiser say they want an ad to only go to people likely to be pregnant. Or they might let advertisers make ads go only to "Jew Haters" (which is ethically very bad, and something Meta allowed).
"Using behavioral data for tailored advertisements, while profitable, threatens user autonomy. When platforms like Meta use user data without explicit authorization, it seems invasive; I've seen how quickly advertisements change depending on a single search or discussion. This type of tracking makes me feel as if my privacy is continually jeopardized, and it underlines the disparity between profit-driven businesses and user control."
-
Now that we’ve looked at what capitalism is, let’s pick a particular example of a social media company (Meta, which owns Facebook, Instagram, WhatsApp, etc.), and look at its decisions through a capitalism lens.
"Surveillance capitalism creates severe ethical concerns, particularly regarding the exploitation of 'behavioral surplus.' Reading about Meta's example of permitting harmful targeted ads reminded me of how I've received strangely particular adverts on Instagram, making me wonder how much the platform knows about me. It's distressing to learn how customer data can be abused, and businesses must accept responsibility for protecting against such behaviors."
-
- Nov 2024
-
-
18.2. Online Criticism and Shaming# While public criticism and shaming have always been a part of human culture, the Internet and social media have created new ways of doing so. We’ve seen examples of this before with Justine Sacco and with crowd harassment (particularly dogpiling). For an example of public shaming, we can look at late-night TV host Jimmy Kimmel’s annual Halloween prank, where he has parents film their children as the parents tell the children that the parents ate all the kids’ Halloween candy. Parents post these videos online, where viewers are intended to laugh at the distress, despair, and sense of betrayal the children express. I will not link to these videos which I find horrible, but instead link you to these articles: Jimmy Kimmel’s Halloween prank can scar children. Why are we laughing? (archived copy) Jimmy Kimmel’s Halloween Candy Prank: Harmful Parenting? We can also consider events in the #MeToo movement as at least in part public shaming of sexual harassers (but also of course solidarity and organizing of victims of sexual harassment, and pushes for larger political, organizational, and social changes).
Shaming and verbal attacks on the internet can be really hurtful to others, so I think we all have an obligation to “watch our mouths” and not express our opinions without thinking about the consequences, since thoughtless remarks can cause misunderstandings and harm.
-
-
-
18.1. Shame vs. Guilt in childhood development
This reminds me of an incident from my childhood. I left the house and lost my keys, and my father happened to be traveling and couldn't return in time to drop them off. When my mother found out, she was furious, and she kicked me in front of others, which made me feel very embarrassed. I remember this experience vividly because my parents often dealt with my mistakes in this manner, which caused me to develop an avoidance mentality and a desire not to address family problems.
-
-
-
Do you believe crowd harassment is ever justified?
Crowd harassment is rarely justifiable. While public shaming can uncover societal faults, as Meghan Markle's situation demonstrated, it frequently leads to unchecked attacks and even stochastic terrorism. Organized raids or bogus crowds compound harm, while anonymity encourages irresponsibility. Addressing problems requires structural approaches rather than mob violence, which undermines justice and perpetuates suffering.
-
-
-
Have you experienced or witnessed harassment on social media (that you are willing to share about)?
I've observed bullying and dogpiling on social media, such as a friend being bombarded with insults and threats after voicing a viewpoint. Another concern I've noticed is cyberstalking, which involves people creating bogus identities to track down others. The anonymity of social media intensifies the actions mentioned in the book, making harassment easier and more harmful to victims' mental health and safety.
-
-
-
In what ways do you think you’ve participated in any crowdsourcing online? What do you think a social media company’s responsibility is for the crowd actions taken by users on its platform? Do you think there are ways a social media platform can encourage good crowdsourcing and discourage bad crowdsourcing?
-
I've contributed to forums, social media platforms, and Q&A sites by expressing my thoughts, answering questions, and participating in group knowledge conversations. These acts are examples of crowdsourcing, which relies on the contributions of many people to supply varied viewpoints and information.
-
Social media corporations should be held accountable for crowd activities on their platforms, ensuring that they do not promote harmful or deceptive activity. They must offer protections and content control to protect users from disinformation and online abuse, therefore preserving a healthy online community.
-
Platforms may encourage beneficial behavior by offering incentives for constructive contributions, while also adopting stringent content management and reporting mechanisms to reduce detrimental crowdsourcing. This encourages ethical, positive engagement among users.
-
-
-
-
Location: Some forms of communication require you to be physically close, some allow you to be located anywhere with an internet signal. Time delay: Some forms of communication are almost instantaneous, some have small delays (you might see this on a video chat system), or have significant delays (like shipping a package). Synchronicity: Some forms of communication require both participants to communicate at the same time (e.g., video chat), while others allow the person to respond when convenient (like a mailed physical letter). Archiving: Some forms of communication automatically produce an archive of the communication (like a chat message history), while others do not (like an in-person conversation). Anonymity: Some forms of communication make anonymity nearly impossible (like an in-person conversation), while others make it easy to remain anonymous. Audience: Communication could be private or public, and it could be one-way (no ability to reply), or two+-way where others can respond.
It's really important that we understand the characteristics of the communication we use every day. As the paragraph says, means of communication differ along many dimensions. For example, this is the first time I've encountered "synchronicity": in a synchronous conversation like a chat, everyone needs to show respect and patience.
-
-
-
15.1.2. Untrained Staff# If you are running your own site and suddenly realize you have a moderation problem you might have some of your current staff (possibly just yourself) start handling moderation. As moderation is a very complicated and tricky thing to do effectively, untrained moderators are likely to make decisions they (or other users) regret.
When I'm in charge of content moderation and, due to limited resources or a sudden surge of problematic content, I delegate tasks to less experienced team members, mistakes are almost guaranteed. In these situations, I may make decisions that I or the users will regret later. Content moderation is a complex task that necessitates a thorough understanding of platform policies as well as sensitivity to ethical concerns.
-
-
-
14.1.1. Quality Control# In order to make social media sites usable and interesting to users, they may ban different types of content such as advertisements, disinformation, or off-topic posts. Almost all social media sites (even the ones that claim “free speech”) block spam, mass-produced unsolicited messages, generally advertisements, scams, or trolling. Without quality control moderation, the social media site will likely fill up with content that the target users of the site don’t want, and those users will leave. What content is considered “quality” content will vary by site, with 4chan considering a lot of offensive and trolling content to be “quality” but still banning spam (because it would make the site repetitive in a boring way), while most sites would ban some offensive content.
Through my reading, I have learned that quality control is critical to maintaining the usability and user engagement of social media platforms. This chapter focuses on how platforms manage spam, disinformation, and some advertising. Different platforms have very different understandings of “quality”: some platforms (like 4chan) allow more offensive content than others, while other platforms are very strict about what content they allow. I think that without effective quality control, social media sites are likely to lose a portion of their loyal user base, as people value the quality and relevance of information.
-
-
-
13.2.2. Trauma Dumping
Social media blurs the lines between personal and public worlds, which can lead to unintentional oversharing or context collapse, when a message intended for a certain group is viewed by others who are not prepared for its content. This portion made me consider the role of social media design in this phenomenon; platforms frequently promote "sharing" without prudence. Should social media firms provide more options to help users evaluate their audience before publishing sensitive information?
-
-
-
13.1.1. Digital Detox?
The "Digital Detox" section explores the idea of withdrawing from social media owing to its possible negative impacts on mental health, but it also questions the notion that social media is intrinsically poisonous. This analogy, I believe, demonstrates how we frequently romanticize "simpler times" while ignoring the intricacies of earlier times, just as some individuals may overemphasize the bad aspects of digital environments while ignoring their benefits. In this regard, is it feasible that the problem stems from how we utilize social media rather than the network itself?
-
-
-
12.3.1. Replication (With Inheritance)# For social media content, replication means that the content (or a copy or modified version) gets seen by more people. Additionally, when a modified version gets distributed, future replications of that version will include the modification (a.k.a., inheritance). There are ways of duplicating that are built into social media platforms: Actions such as: liking, reposting, replying, and paid promotion get the original posting to show up for users more Actions like quote tweeting, or the TikTok Duet feature let people see the original content, but modified with new context. Social media sites also provide ways of embedding posts in other places, like in news articles There are also ways of replicating social media content that aren’t directly built into the social media platform, such as: copying images or text and reposting them yourself taking screenshots, and cross-posting to different sites
After reading this, I understood how social media platforms can extend the reach of content through replication (including methods such as reposting, quote tweeting, and embedding). These passages emphasize both platform-supported and user-driven ways of distributing and modifying content. But I continue to wonder: as platforms evolve and systems are optimized, will these copying mechanisms lead to misinformation or to content being taken out of context?
-
-
-
Natural Selection Some characteristics make it more or less likely for an organism to compete for resources, survive, and make copies of itself
I studied this in biology class during my high school years, and the part about natural selection impressed me the most. As Darwin's theory of evolution says, every living thing survives through "survival of the fittest," with traits optimized genetically from one generation to the next. That's what our world is all about: brutal, but also realistic.
-
- Oct 2024
-
-
When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users. Some recommendation algorithms can be simple such as reverse chronological order, meaning it shows users the latest posts (like how blogs work, or Twitter’s “See latest tweets” option). They can also be very complicated taking into account many factors, such as:
This section illustrates how recommendation algorithms influence what we see online, balancing basic ways like displaying recent postings with more complicated aspects like interactions and geography. It's fascinating how these algorithms impact our decisions and habits without our knowledge. However, it raises issues about privacy and exploitation, particularly when suggestions are based on data gathered without explicit agreement. These algorithms can also produce echo chambers, which limit the diversity of viewpoints. Should social media sites be more open about how their algorithms function, or will this lead to new problems, such as individuals attempting to abuse them?
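To make the difference concrete for myself, here is a tiny Python sketch of two recommendation algorithms the chapter mentions: reverse chronological order, and a ranking based on an engagement signal. The post fields (`text`, `time`, `likes`) are my own made-up example data, not anything from the book.

```python
from datetime import datetime, timezone

# Hypothetical posts; field names are assumptions for illustration.
posts = [
    {"text": "older post", "time": datetime(2024, 10, 1, tzinfo=timezone.utc), "likes": 50},
    {"text": "newest post", "time": datetime(2024, 10, 3, tzinfo=timezone.utc), "likes": 2},
    {"text": "middle post", "time": datetime(2024, 10, 2, tzinfo=timezone.utc), "likes": 10},
]

def reverse_chronological(posts):
    """Simplest recommendation algorithm: newest posts first."""
    return sorted(posts, key=lambda p: p["time"], reverse=True)

def engagement_ranked(posts):
    """A slightly more complex algorithm: rank by likes (just one of the
    many signals a real platform might combine)."""
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

feed = reverse_chronological(posts)
print([p["text"] for p in feed])  # ['newest post', 'middle post', 'older post']
```

Notice how the same three posts appear in a different order depending on which algorithm the platform picks, which is exactly the design choice the section is asking us to think about.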
-
-
-
11.2.1. Individual vs. Systemic Analysis# Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibility that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine vs. powder cocaine in the 90s. The guidelines suggested harsher sentences on the version of cocaine more commonly used by Black people, and lighter sentences on the version of cocaine more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act).
This section helped me separate individual analysis from systemic analysis: an outcome can be biased even when no individual involved intends any bias. The crack vs. powder cocaine sentencing example shows how rules themselves can carry bias, producing racist outcomes regardless of the intent of the judges who followed them. It makes me think that addressing such problems requires changing rules and institutions, not just asking individuals to be less biased.
-
-
-
10.1. Disability
This section encourages us to evaluate disability as a social construct shaped by preconceived beliefs about what abilities are "normal." For example, trichromats are not considered "disabled," despite the fact that they lack the additional color sensitivity that tetrachromats do. It serves as a powerful reminder that disabilities are frequently imposed barriers rather than inherent limitations. This makes me wonder how much more inclusive our environments could be if they accommodated a broader spectrum of talents from the outset.
-
-
-
10.2.1. Coping Strategies
After reading this, I realized that coping mechanisms such as masking place the whole weight on people with impairments to adapt to situations that were not meant for them. However, I believe it underscores a wider issue: when systems are inaccessible, disabled people are unfairly expected to find ways to fit in, often at considerable personal cost. And I'm wondering if there's a better way for society to support people with invisible disabilities so they don't feel compelled to hide or adapt alone.
-
-
-
There are many reasons, both good and bad, that we might want to keep information private.
Recent social media leaks show how easily private material can propagate, and context collapse can result in the unintentional exposure of private conversations. Platforms must guarantee improved privacy measures, since the boundaries between private and public online interactions are becoming increasingly hazy.
-
-
-
While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.
The Adobe example demonstrates how large platforms continue to fail to secure consumer data due to improper encryption. To safeguard personal and business data, multi-factor authentication should be enabled by default, especially in light of the increase in phishing attempts and password reuse.
-
-
-
-
Data can be poisoned intentionally as well. For example, in 2021, workers at Kellogg’s were upset at their working conditions, so they agreed to go on strike, and not work until Kellogg’s agreed to improve their work conditions. Kellogg’s announced that they would hire new workers to replace the striking workers: Kellogg’s proposed pay and benefits cuts while forcing workers to work severe overtime as long as 16-hour-days for seven days a week. Some workers stayed on the job for months without a single day off. The company refuses to meet the union’s proposals for better pay, hours, and benefits, so they went on strike. Earlier this week, the company announced it would permanently replace 1,400 striking workers. People Are Spamming Kellogg’s Job Applications in Solidarity with Striking Workers – Vice MotherBoard People in the antiwork subreddit found the website where Kellogg’s posted their job listing to replace the workers. So those Redditors suggested they spam the site with fake applications, poisoning the job application data, so Kellogg’s wouldn’t be able to figure out which applications were legitimate or not (we could consider this a form of trolling). Then Kellogg’s wouldn’t be able to replace the striking workers, and they would have to agree to better working conditions. Then Sean Black, a programmer on TikTok saw this and decided to contribute by creating a bot that would automatically log in and fill out applications with random user info, increasing the rate at which he (and others who used his code) could spam the Kellogg’s job applications:
The Kellogg's incident, in which the general public used electronic means to interfere with the company's operations, is a fascinating illustration of collective resistance in the digital age. In my opinion, it represents society's reaction to large businesses abusing their power. But this kind of "data poisoning" raises the moral question of what counts as an appropriate mode of dissent. Although flooding the hiring process with phony applications paralyzes it, it may have unintended consequences for other parties, so a deeper analysis of the boundaries of technical confrontation is required.
-
-
-
Social media platforms collect various types of data on their users. Some data is directly provided to the platform by the users. Platforms may ask users for information like: email address name profile picture interests friends Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all this, but they might): when users are logged on and logged off who users interact with What users click on what posts users pause over where users are located what users send in direct messages to each other Online advertisers can see what pages their ads are being requested on, and track users across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website.
Social media collects user data not only to improve the user experience, but also for compelling business purposes. This makes me wonder: if data collection becomes more widespread, do we have appropriate control over what data is used? While users often implicitly agree to data collecting tactics when they sign up, this does not mean they fully understand how the data will be used. As a result, this behavior is not always desired and may occasionally go against the user's wishes.
-
-
-
7.4. Responding to trolls?# One of the traditional pieces of advice for dealing with trolls is “Don’t feed the trolls,” which means that if you don’t respond to trolls, they will get bored and stop trolling. We can see this advice as well in the trolling community’s own “Rules of the Internet”: Do not argue with trolls - it means that they win But the essayist Film Crit Hulk argues against this in Don’t feed the trolls, and other hideous lies. That piece argues that the “don’t feed the trolls” strategy doesn’t stop trolls from harassing: Ask anyone who has dealt with persistent harassment online, especially women: [trolls stopping because they are ignored] is not usually what happens. Instead, the harasser keeps pushing and pushing to get the reaction they want with even more tenacity and intensity. It’s the same pattern on display in the litany of abusers and stalkers, both online and off, who escalate to more dangerous and threatening behavior when they feel like they are being ignored. Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls. Instead, Film Crit Hulk suggests giving power to the victims and using “skilled moderation and the willingness to kick people off platforms for violating rules about abuse”
The "don't feed the trolls" advice seems out of date now, especially given how frequent internet harassment is. I agree with Film Crit Hulk that ignoring trolls often just makes them more aggressive. Instead of blaming victims, platforms should step up and implement stricter moderation to prevent trolls from worsening situations.
-
-
-
7.2.2. Origins of Internet Trolling# We can trace Internet trolling to early social media in the 1980s and 1990s, particularly in early online message boards and in early online video games. In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from. One set of the early Internet-based video games were Multi-User Dungeons (MUDs), where you were given a text description of where you were and could say where to go (North, South, East, West) and text would tell you where you were next. In these games, you would come across other players and could type messages or commands to attack them. These were the precursors to more modern Massively multiplayer online role-playing games (MMORPGS). In these MUDs, players developed activities that we now consider trolling, such as “Griefing” where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “Flaming” where a player intentionally starts a hostile or offensive conversation. In the 2000s, trolling went from an activity done in some communities to the creation of communities that centered around trolling such as 4chan (2003), Encyclopedia Dramatica (2004), and some forums on Reddit (2005).
It's funny how trolling began as a prank among experienced users to tease "newbies." However, the fact that it grew into something more poisonous, particularly with rules like 30 and 31, demonstrates how easily seemingly innocent habits can become detrimental, especially when they target certain groups such as women.
-
-
-
Inauthentic behavior is when the reality doesn’t match what is being presented. Inauthenticity has, of course, existed throughout human history, from Ea-nasir complaining in 1750 BCE that the copper he ordered was not the high quality he had been promised, to 1917 CE in England when Arthur Conan Doyle (the author of the Sherlock Holmes stories) was fooled by photographs that appeared to be of a child next to fairies
I feel like this issue is magnified in today's information environment. People curate how they portray themselves online, which often leads to a disconnect between image and reality that can damage relationships and self-perception. We need more authentic interactions and more honesty between people.
-
-
-
5.6.1. Social Media Connection Types# One difference you may notice with different social media sites is in how you form connections with others. Some social media sites don’t have any formal connections. Like two users who happen to be on the same bulletin board. Some social media sites only allow reciprocal connections, like being “friends” on Facebook Some social media sites offer one-way connections, like following someone on Twitter or subscribing to a YouTube channel. There are, of course, many variations and nuances besides what we mentioned above, but we wanted to get you started thinking about some different options.
Social media connection types include more than just reciprocal and one-way relationships. Reddit and Facebook Groups, for example, allow users to interact in groups based on mutual interests without the requirement for personal relationships. Another option is ad hoc or anonymous connections, such as Clubhouse or 4chan, which allow users to participate in short-term discussions without disclosing their personal identity. These various modes of connectivity strengthen social media interactions by responding to the requirements and interests of individual users.
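One way I picture the two main connection types is as edges in a graph: a one-way follow (like on Twitter) is a directed edge, and a reciprocal "friendship" (like on Facebook) requires edges in both directions. A small Python sketch, with made-up usernames:

```python
# One-way connections (like Twitter follows) stored as directed edges.
follows = {("alice", "bob"), ("bob", "alice"), ("alice", "carol")}

def are_friends(a, b, follows):
    """A reciprocal connection (like a Facebook friendship) exists only
    when both directions are present."""
    return (a, b) in follows and (b, a) in follows

print(are_friends("alice", "bob", follows))    # True  (mutual follow)
print(are_friends("alice", "carol", follows))  # False (one-way only)
```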
-
-
-
5.3.1. Weblogs (Blogs)# In the mid-1990s, some internet users started manually adding regular updates to the top of their personal websites (leaving the old posts below), using their sites as an online diary, or a (web) log of their thoughts. In 1998/1999, several web platforms were launched to make it easy for people to make and run blogs (e.g., LiveJournal and Blogger.com). With these blog hosting sites, it was much simpler to type up and publish a new blog entry, and others visiting your blog could subscribe to get updates whenever you posted a new post, and they could leave a comment on any of the posts.
This video discusses the rise of blogging in the late 1990s, which altered how people express themselves. People can quickly create content and communicate with their viewers using sites like LiveJournal and Blogger. For example, a photography enthusiast could post photographs and tips on a blog, and readers could subscribe and write comments to discuss them, resulting in a tiny community.
-
-
-
The first way of combining data is by making a list. So we can make a list of the numbers from 1 to 10: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
For this example, if we have a numeric range, listing every number that can occur ensures that every possible outcome is accounted for. I learned this when I studied INFO201, which covered the R language and some other informatics concepts.
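In Python, that list of the numbers from 1 to 10 can be written out directly, or built with `range` (note that `range` stops before its end value, so we pass 11 to include 10):

```python
# The list of numbers from 1 to 10, written out directly:
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# The same list built with range(); range(1, 11) stops before 11.
also_numbers = list(range(1, 11))

print(numbers == also_numbers)  # True
```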
-
-
-
Do I have enough money in my wallet to pay for the item? Does this tweet start with “hello” (meaning it is a greeting)?
Q1: Yes. As in the example "has_enough_money = money_in_wallet > cost_of_item": if the money in my wallet is greater than the cost of the item, that means I still have enough money (has_enough_money is True). Q2: Hmm, I'm not sure, but it might.
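As a sketch of how both questions could be written as Python Boolean expressions (the variable names for Q1 follow the book's example; the tweet text and the `.lower()` case-insensitivity choice are my own):

```python
# Q1: comparing two numbers produces a Boolean (True/False).
money_in_wallet = 20.00
cost_of_item = 5.25
has_enough_money = money_in_wallet > cost_of_item
print(has_enough_money)  # True

# Q2: strings have a startswith() method that also returns a Boolean.
# Lowercasing first makes the check case-insensitive, so "Hello" counts.
tweet = "Hello everyone, happy Friday!"
is_greeting = tweet.lower().startswith("hello")
print(is_greeting)  # True
```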
-
-
-
Why do you think social media platforms allow bots to operate?
I believe social media platforms allow bots to function because they promote user engagement and streamline worker duties. Bots can automate tasks like customer care or content updates, saving time for both users and platforms. However, there is a downside: bots can be used to propagate misinformation or manipulate public opinion, complicating accountability and making it hard to hold anyone responsible when criminal or unethical activity occurs.
-
-
-
3.2.4. Registered vs. Unregistered bots# Most social media platforms provide an official way to connect a bot to their platform (called an Application Programming Interface, or API). This lets the social media platform track these registered bots and provide certain capabilities and limits to the bots (like a rate limit on how often the bot can post). But when some people want to get around these limits, they can make bots that don’t use this official API, but instead, open the website or app and then have a program perform clicks and scrolls the way a human might. These are much harder for social media platforms to track, and they normally ban accounts doing this if they are able to figure out that is what is happening.
I discovered the distinction between registered and unregistered bots: registered bots follow guidelines via application programming interfaces (APIs), but unregistered bots pose a challenge for platform administration because they evade those rules. I'll be thinking about the impact of such bots on platform integrity, such as how social media companies can better identify unregistered bots while protecting user privacy.
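As a rough sketch of the kind of rate limit a platform might apply to a registered bot (this is my own toy model, not any real platform's API): the platform remembers when the bot recently posted and refuses posts beyond a quota per time window.

```python
import time

class RateLimiter:
    """Toy sketch of a platform-side rate limit for a registered bot:
    allow at most `max_posts` posts per `window_seconds`."""

    def __init__(self, max_posts, window_seconds):
        self.max_posts = max_posts
        self.window = window_seconds
        self.timestamps = []  # times of recently allowed posts

    def allow_post(self, now=None):
        now = time.time() if now is None else now
        # Keep only timestamps that are still inside the current window.
        self.timestamps = [t for t in self.timestamps if now - t < self.window]
        if len(self.timestamps) < self.max_posts:
            self.timestamps.append(now)
            return True
        return False

limiter = RateLimiter(max_posts=3, window_seconds=60)
results = [limiter.allow_post(now=100) for _ in range(4)]
print(results)  # [True, True, True, False]
```

An unregistered bot that clicks through the website like a human never passes through a check like this, which is exactly why the book says such bots are harder for platforms to track and limit.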
-
-
-
Deontological thinking comes out of the same era as Natural Rights thinking, and they are rooted in similar assumptions about the world. Deontology is often associated with Kant, because at that time, he gave us one of the first systematic, or comprehensive, interpretations of those ideas in a fully-fledged ethical framework. But deontological ethics does not need to be based on Kant’s ethics, and many ethicists working in the deontological tradition have suggested that reasoning about the objective reality should lead us to derive different sets of principles.
The description of Deontology can be expanded with Kant’s famous example about lying. He believed that even if lying could protect someone, it’s still wrong because the rule “it’s okay to lie” couldn’t be universalized without destroying trust in society. I agree with him—if everyone became dishonest to "protect others," the world would become colder and less compassionate, and trust between people would disappear. This shows that deontology values absolute duties over outcomes.
-
-
-
Why do you think the people who Kumail talked with didn’t have answers to his questions?
I now understand the staff's reaction better; perhaps the question is one that few people can answer because it is so complex. Since the start of the 21st century, the emerging technology industry has been developing rapidly, and people have not prioritized ethical issues; how to generate greater value and wealth through these technologies is what they value more. R&D companies focus more on technical feasibility and market potential, with little opportunity to discuss ethical issues or prepare for unintended consequences. This is reflected in the fact that staff don't know how to respond when asked.
-