41 Matching Annotations
  1. Mar 2025
  2. social-media-ethics-automation.github.io
    1. Luddite. December 2023. Page Version ID: 1189255462. URL: https://en.wikipedia.org/w/index.php?title=Luddite&oldid=1189255462 (visited on 2023-12-10).

      A Luddite was a member of the 19th-century British textile workers' movement that opposed the use of cost-reducing machinery, believing these machines would replace skilled workers, reduce wages, and produce inferior goods. They protested the unfair practices of manufacturers by secretly destroying machines. The movement emerged in Nottingham in 1811 and spread to northern England and Yorkshire. Factory owners shot at the protesters, and the government eventually suppressed the movement through legal and military means, including executions and transportation of participants.

    1. In the first chapter of our book we quoted actor Kumail Nanjiani on tech innovators’ lack of consideration of ethical implications of their work. Of course, concerns about the implications of technological advancement are nothing new. In Plato’s Phaedrus [u1] (~370BCE), Socrates tells (or makes up[1]) a story from Egypt critical of the invention of writing: Now in those days the god Thamus was the king of the whole country of Egypt, […] [then] came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; […] [W]hen they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: […] this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. In England in the early 1800s, Luddites [u2] were upset that textile factories were using machines to replace them, leaving them unemployed, so they sabotaged the machines. The English government sent soldiers to stop them, killing and executing many. (See also Sci-Fi author Ted Chiang on Luddites and AI [u3])

      Socrates' criticism of writing struck me. He believed that writing would weaken people's memory and make them "seemingly wise, but actually ignorant." Yet today it is hard for us to imagine a world without writing. Far from destroying wisdom, writing has become the cornerstone of knowledge dissemination and the progress of civilization. This made me wonder whether our concerns about new technologies today (such as AI and social media) are as overly pessimistic as Socrates' view was back then.

  3. social-media-ethics-automation.github.io
    1. Margaret Kohn and Kavita Reddy. Colonialism. In Edward N. Zalta and Uri Nodelman, editors, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, spring 2023 edition, 2023. URL: https://plato.stanford.edu/archives/spr2023/entries/colonialism/ (visited on 2023-12-10).

      This article mainly introduces the concept, historical development, and related theories of colonialism, and explores how different schools of thought have criticized or defended it. It shows that colonialism is not only an economic and political issue but also has a profound impact on culture and identity, and it continues to spark discussion in the contemporary world.

    1. Colonialism in Tech# The tech industry is full of colonialist thinking and practices, some more subtle than others. To begin with, much of the tech industry is centralized geographically, specifically in Silicon Valley, San Francisco, California. The leaders and decisions in how tech operates come out of this one wealthy location in a wealthy nation. Then, much of tech is dependent on exploiting cheap labor, often in dangerous conditions, in other countries (thus extracting the resource of cheap labor, from places with “inferior” governments and economies). This labor might be physical labor, or dealing with dangerous chemicals, or the content moderators who deal with viewing horrific online content. Tech industry leaders in Silicon Valley then take what they made with exploited labor, and sell it around the world, feeling good about themselves, believing they are benefitting the world with their “superior” products. 20.2.1. Example: One Laptop Per Child# An example of how this can play out is the failed One Laptop Per Child [t24] (OLPC) project. In late 2005, tech visionary and MIT Media Lab founder Nicholas Negroponte [introduced a] $100 laptop would have all the features of an ordinary computer but require so little electricity that a child could power it with a hand crank OLPC’s $100 laptop was going to change the world — then it all went wrong [t25] OLPC wanted to give every child in the world a laptop, so they could learn computers, believing he would benefit the world. But this project failed for a number of reasons, such as: The physical device didn’t work well. The hand-powered generator was unreliable, the screen too small to read. OLPC was not actually providing a “superior” product to the rest of the world. When they did hand out some, it didn’t come with good instructions. Kids were just supposed to figure it out on their own. If this failed, it must be the fault of the poor people around the world. It wasn’t designed for what kids around the world would actually want. They didn’t take input from actual kids around the world. OLPC thought they had superior knowledge and just assumed they knew what people would want. In the end, this project fell apart, and most of tech moved on to whatever next big idea to save the world.

      The article points out that global technological decisions are mainly concentrated in Silicon Valley (near San Francisco, USA), and this region is far more wealthy than most parts of the world. This has led to technological development being dominated by elites from a handful of wealthy countries who see the products and technologies they create as “advanced” and the rest of the world as “backward” audiences.

  4. social-media-ethics-automation.github.io
    1. Merriam-Webster. Definition of CAPITALISM. December 2023. URL: https://www.merriam-webster.com/dictionary/capitalism (visited on 2023-12-10).

      This article mainly summarizes the relationships and differences between Communism, Socialism, Capitalism and Democracy, and explains why these concepts are prone to confusion.

    1. Related Terms# Here are a few more terms that are relevant to capitalism that we need to understand in order to get to the details of decision-making and strategies employed by social media companies. Shares / Stocks Shares or stocks are ownership of a percentage of a business, normally coming with getting a percentage of the profits and a percentage of power in making business decisions. Companies then have a board of directors who represent these shareholders. The board is in charge of choosing who runs the company (the CEO). They have the power to hire and fire CEOs For example: in 1985, the board of directors for Apple Computers denied Steve Jobs [s3] (co-founded Apple) the position of CEO and then they fired him completely CEOs of companies (like Mark Zuckerberg of Meta) are often both wage-laborers (they get a salary, Zuckerberg gets a tiny symbolic $1/year) and shareholders (they get a share of the profits, Zuckerberg owns 16.8%) Free Market [s4] Businesses set their own prices and customers decide what they are willing to pay, so prices go up or down as each side decides what they are willing to charge/spend (no government intervention) See supply and demand [s5] What gets made is theoretically determined by what customers want to spend their money on, with businesses competing for customers by offering better products and better prices Especially the people with the most money, both business owners and customers Monopoly [s6] “a situation where a specific person or enterprise is the only supplier of a particular thing” Monopolies are considered anti-competitive (though not necessarily anti-capitalist). Businesses can lower quality and raise prices, and customers will have to accept those prices since there are no alternatives. Cornering a market [s7] is being close enough to a monopoly to mostly set the rules (e.g., Amazon and online shopping) 19.1.2. Socialism# Let’s contrast capitalism with socialism: Socialism [s8], in contrast is a system where: A government owns the businesses (sometimes called “government services”) A government decides what to make and what the price is the price might be free, like with public schools, public streets and highways, public playgrounds, etc. A government then may hire wage laborers [s2] at predetermined rates for their work, and the excess business profits or losses are handled by the government For example, losses are covered by taxes, and excess may pay for other government services or go directly to the people (e.g., Alaska uses its oil profits to pay people to live there [s9]). As an example, there is one Seattle City Sewer system, which is run by the Seattle government. Having many competing sewer systems could actually make a big mess of the underground pipe system. 19.1.3. Accountability in Capitalism and other systems# Let’s look at who the leaders of businesses (or services) are accountable for in capitalism and other systems. Democratic Socialism (i.e., “Socialists[1]”)# With socialism in a representative democracy (i.e., “democratic socialism”), the government leaders are chosen by the people through voting. And so, while the governmental leaders are in charge of what gets made, how much it costs, and who gets it, those leaders are accountable to the voters. So, in a democratic socialist government, theoretically, every voter has an equal say in business (or government service) decisions. 
Note, that there are limitations to the government leaders being accountable to the people their decisions affect, such as government leaders ignoring voters’ wishes, or people who can’t vote (e.g., the young, non-citizens, oppressed minorities) and therefore don’t get a say. Capitalism# In capitalism, business decisions are accountable to the people who own the business. In a publicly traded [s10] business, that is the shareholders. The more money someone has invested in a company, the more say they have. And generally in a capitalist system, the rich have the most say in what happens (both as business owners and customers), and the poor have very little say in what happens. When shareholders buy stocks in a company, they are owed a percentage of the profits. Therefore it is the company leaders’ fiduciary duty [s11] to maximize the profits of the company (called the Friedman Doctrine [s12]). If the leader of the company (the CEO) intentionally makes a decision that they know will reduce the company’s profits, then they are cheating the shareholders out of money the shareholders could have had. CEOs mistakenly do things that lose money all the time, but doing so on purpose is a violation of fiduciary duty. There are many ways a CEO might intentionally lower profits unfairly, such as by having their company pay more than necessary when buying something from the CEO’s friend’s company. But even if a CEO decides to reduce profits for a good reason (e.g., it may be unethical to overwork the employees), then they are still violating their fiduciary duty, and the board of directors might fire them or pressure them into prioritizing profits above all else. For example, the actor Stellan Skarsgård complained that in the film industry, it didn’t matter if a company was making good movies at a decent profit. If there is an opportunity for even more profit by making worse movies, then that is what business leaders are obligated to do: “When raw market forces come in, [movie] studios start being run by companies that don’t care if they’re dealing in films or toothpaste so long as they get their 10% [return]. When AT&T took over Time Warner, it immediately told HBO to become lighter and more commercial. They were always making money. But not enough for an investor.” Stellan Skarsgård [s13] Or as another example, if the richest man in the world offers to buy out a social media site for more than it’s worth [s14], then it is the fiduciary duty of the leaders of the social media site to accept that offer. It doesn’t matter if it is clear that this rich man doesn’t know what he is doing and is likely to destroy the social media site, and potentially cause harm to society at large; the fiduciary duty of the company leaders is to get as much money as possible to their shareholders, and they can’t beat being overpaid by the richest man in the world. Rejecting that deal would be cheating the stockholders out of money. CEOs of social media companies, under pressure from the board of directors, might also make decisions that prioritize short-term profits for the shareholders over long-term benefits, leading to what author Corey Doctorow calls the “Enshittification” of platforms (See his article: The ‘Enshittification’ of TikTok: Or how, exactly, platforms die. [s15]. Privately owned [s16] businesses or organizations are a little different in that the owner (or owners) have full say on what happens, and are free to make it as unprofitable or profitable as they want. 
Though, if the private ownership of the business was purchased with loans [s17], then they have some responsibilities to the lenders. Other Accountability Models# Besides the privately owned and publicly traded businesses in capitalism, and government services in socialism, there are other accountability models as well. For example: In a publicly funded organization, non-profit organization, or crowd-funded project (e.g., Wikipedia [s18], NPR [s19], Kickstarter projects [s20], Patreon creators [s21], charities), the investors (or donors) are not investing in profits from the organization, but instead are investing in the product or work the organization does. Therefore the responsibility to investors is not to make profits but to do the work investors are paying for. In this model, the more money someone invests or donates, the more say they have over what the organization does (like capitalism and unlike democratic socialism). For example, when buying groceries, you might be prompted to let the grocery store take an extra $5 from you to give to a charity that gives food to the needy. Then the grocery store corporation will give $5 to the charity and look good for doing so. But the corporation also gets $5 more say in how the charity operates (and they can pressure the charity to not do anything that hurts the corporation’s profits, and thus look charitable without violating their fiduciary duty)[2]. In a consumer co-operative [s22] businesses and organizations, the customers of the business have a say in how the business is run, and therefore the leaders are accountable to the customers. So if the customers want the business to do something that can only be done by treating the employees poorly, then the business leaders are obligated to follow the customer’s demands. If the company makes excess profits, that money is sent out to the customers. An example of a consumer co-operative is the outdoor recreation gear store REI [s23]. In a worker co-operative [s24] businesses and organizations, the employees at the company are the people who have a say in how the business is run, and therefore the leaders are accountable to the employees (rather than vice-versa). Since the business leaders are controlled by the workers, this is a system where the workers control the means of production [s25] (e.g., they control the factories, offices or other business resources). If the business makes excess profits, that money is sent out to the employees. 19.1.4. Reflection Questions# In what ways do you see capitalism, socialism, and other funding models show up in the country you are from or are living in? [1] Advocates for socialism are often referring to democratic socialism. This is different than “National Socialists” which is shortened to “Nazi [s26],” and is a form of Fascism [s27]. While Nazis do have government leaders deciding on what gets made and who gets it, that is because it is a totalitarian dictatorship run by one person (e.g., Adolf Hitler), who is accountable to no one [s28]. Nazis historically have been very opposed to socialism [s29], and have a mixed relationship with capitalism [s30]. What Nazis are primarily concerned with is “cleansing” [s31] their nation from people who aren’t part of the “true people” [s32]. So their goal is to identify all the “undesirable” people (e.g., Jewish people, Roma people, queer people, disabled people, etc.), and take their stuff, and then deport or murder them in a genocide. 
From an ethics perspective, we’d like to state that Nazis (and other Fascists) are very bad. [2] To give a concrete example: Albertsons Companies, Inc., which owns of some of the largest US grocery store chains (Albertsons, Safeway, ACME, etc.) [s33], created their own non-profit charity organization Nourishing Neighbors [s34], which they donate to and they ask customers to donate to at checkout. Then Albertsons’ Nourishing Neighbors non-profit can donate to local food charities of their own selection, and exclude “political organizations or activities” and “advocacy programs” [s35]. This doesn’t mean that these charities and the people in them aren’t doing good things, but the fiduciary duty of Albertsons Companies, Inc. and corporate control over Nourishing Neighbors will limit, for better or worse, what things this money goes toward.

      The description of socialism in the article emphasizes government control of resources and production, using public services (such as education, roads, and sewers) as examples. This model can avoid certain downsides of capitalism, such as market failures and an excessive gap between rich and poor. But it raises questions of government efficiency and accountability: can the government really manage the economy efficiently? Will bureaucracy hinder innovation? In democratic socialist countries, governments are accountable to voters, but problems of abuse of power or inefficiency may still arise.

  5. social-media-ethics-automation.github.io
    1. Guilt–shame–fear spectrum of cultures. November 2023. Page Version ID: 1184808072. URL: https://en.wikipedia.org/w/index.php?title=Guilt%E2%80%93shame%E2%80%93fear_spectrum_of_cultures&oldid=1184808072 (visited on 2023-12-10).

      This article discusses the classification of guilt culture, shame culture and fear culture in cultural anthropology, and explores how these cultures maintain social order and behavioral norms through different emotional mechanisms.

    1. # Before we talk about public criticism and shaming and adults, let’s look at the role of shame in childhood. In at least some views about shame and childhood[1], shame and guilt hold different roles in childhood development [r1]: Shame is the feeling that “I am bad,” and the natural response to shame is for the individual to hide, or the community to ostracize the person. Guilt is the feeling that “This specific action I did was bad.” The natural response to feeling guilt is for the guilty person to want to repair the harm of their action. In this view [r1], a good parent might see their child doing something bad or dangerous, and tell them to stop. The child may feel shame (they might not be developmentally able to separate their identity from the momentary rejection). The parent may then comfort the child to let the child know that they are not being rejected as a person, it was just their action that was a problem. The child’s relationship with the parent is repaired, and over time the child will learn to feel guilt instead of shame and seek to repair harm instead of hide.

      What impressed me the most was that shame makes individuals want to hide themselves, while guilt pushes individuals to fix their mistakes. This does reflect the emotional patterns of many people in real life. If a person is often humiliated in childhood rather than taught to face their mistakes, they may habitually choose to escape rather than actively correct their behavior. This reminds me that many adults subconsciously avoid criticism or failure rather than reflect and improve, which may be related to their childhood experiences.

  6. social-media-ethics-automation.github.io
    1. Constance Grady. Chrissy Teigen’s fall from grace. Vox, June 2021. URL:

      This source explores Chrissy Teigen's rise and fall on Twitter and how her public standing changed dramatically over the past decade. The article points out that Teigen was popular early on for her straightforwardness and humor on Twitter, becoming a "relatable" public figure. However, over time the social media environment changed, and some of Teigen's past inappropriate remarks were revisited, especially her cyberbullying of Courtney Stodden, triggering a strong public backlash and damaging her public image. The article uses Teigen's case to show how a public figure on social media can go from being celebrated to being strongly opposed in a short period of time, and the shifts in social media culture and public expectations reflected behind this.

    1. Individual harassment (one individual harassing another individual) has always been part of human cultures, but social media provides new methods of doing so. There are many methods of harassment through social media. This can be done privately through things like: Bullying: like sending mean messages through DMs Cyberstalking: Continually finding the account of someone, and creating new accounts to continue following them. Or possibly researching the person’s physical location. Hacking: Hacking into an account or device to discover secrets, or make threats. Tracking: An abuser might track the social media use of their partner or child to prevent them from making outside friends. They may even install spy software on their victim’s phone. Death threats / rape threats Etc. Individual harassment can also be done publicly before an audience (such as classmates or family). For example: Bullying: like posting public mean messages Impersonation: Making an account that appears to be from someone and having that account say things to embarrass or endanger the victim. Doxing [q1]: Publicly posting identifying information about someone (e.g., full name, address, phone number, etc.). Revenge porn / deep-fake porn Etc.

      This part of the content made me realize that while social media brings convenience, it also provides more ways to harass people and invade their privacy. Compared with traditional face-to-face bullying, harassment on social media is more concealed, more persistent, and faster, and it may have a more serious impact. In particular, methods such as tracking, hacking, deepfakes, and doxing can make victims lose their sense of security and even affect their social life, work, and mental health offline.

  7. Feb 2025
  8. social-media-ethics-automation.github.io
    1. Crowdsourcing. December 2023. Page Version ID: 1188348631. URL: https://en.wikipedia.org/w/index.php?title=Crowdsourcing&oldid=1188348631#Historical_examples (visited on 2023-12-08).
       [p5] WIRED. How to Not Embarrass Yourself in Front of the Robot at Work. September 2015.

      This article introduces the development history of crowdsourcing, its application areas, and its impact across different disciplines. From early historical examples to modern Wikipedia, Kickstarter, Amazon Mechanical Turk, and so on, crowdsourcing has been used to solve complex problems, collect data, and drive innovation. It is widely used in business, scientific research, government policy, art, linguistics, and other fields. For example, NASA uses crowdsourcing to analyze space images, governments enlist citizens to help formulate policies, and linguists even use social media crowdsourcing to collect dialect data. This demonstrates the importance of crowdsourcing in modern society.

    1. When tasks are done through large groups of people making relatively small contributions, this is called crowdsourcing. The people making the contributions generally come from a crowd of people that aren’t necessarily tied to the task (e.g., all internet users can edit Wikipedia), but then people from the crowd either get chosen to participate, or volunteer themselves. When a crowd is providing financial contributions, that is called crowdfunding (e.g., patreon [p1], kickstarter [p2], gofundme [p3]). Humans have always collaborated on tasks, and crowds have been enlisted in performing tasks long before the internet existed [p4]. What social media (and other internet systems) have done is expand the options for how people can collaborate on tasks. 16.1.1. Different Ways of Collaborating and Communicating# There have been many efforts to use computers to replicate the experience of communicating with someone in person, through things like video chats, or even telepresence robots [p5]. But there are ways that attempts to recreate in-person interactions inevitably fall short and don’t feel the same. Instead though, we can look at different characteristics that computer systems can provide, and find places where computer-based communication works better, and is Beyond Being There [p6] (pdf here [p7]). Some of the different characteristics that means of communication can have include (but are not limited to): Location: Some forms of communication require you to be physically close, some allow you to be located anywhere with an internet signal. Time delay: Some forms of communication are almost instantaneous, some have small delays (you might see this on a video chat system), or have significant delays (like shipping a package). Synchronicity: Some forms of communication require both participants to communicate at the same time (e.g., video chat), while others allow the person to respond when convenient (like a mailed physical letter). Archiving: Some forms of communication automatically produce an archive of the communication (like a chat message history), while others do not (like an in-person conversation) Anonymity: Some forms of communication make anonymity nearly impossible (like an in-person conversation), while others make it easy to remain anonymous. Audience: Communication could be private or public, and they could be one-way (no ability to reply), or two+-way where others can respond. Because of these (and other) differences, different forms of communication might be preferable for different tasks. For example, you might send an email to the person sitting next to you at work if you want to keep an archive of the communication (which is also conveniently grouped into email threads). Or you might send a text message to the person sitting next to you if you are criticizing the teacher, but want to do so discretely, so the teacher doesn’t notice.
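
      To make the comparison of these characteristics concrete for myself, here is a small Python sketch (my own illustration, not the book's code; the media chosen and the values assigned to them are assumptions) that records a few communication media along the dimensions listed above and lets me query them:

```python
# A minimal sketch (my own illustration, not from the book): a few communication
# media described along the characteristics listed in the passage. The specific
# media and the values assigned to them are assumptions made for illustration.

media_characteristics = {
    "in-person conversation": {
        "location": "must be physically close",
        "synchronous": True,
        "archived": False,
        "anonymity_possible": False,
        "audience": "private, two-way",
    },
    "email": {
        "location": "anywhere with internet",
        "synchronous": False,
        "archived": True,
        "anonymity_possible": True,   # e.g., using a throwaway address
        "audience": "private or group, two-way",
    },
    "public social media post": {
        "location": "anywhere with internet",
        "synchronous": False,
        "archived": True,
        "anonymity_possible": True,
        "audience": "public, others can reply",
    },
}

def media_with(characteristic, value):
    """Return the media whose given characteristic matches the given value."""
    return [name for name, traits in media_characteristics.items()
            if traits.get(characteristic) == value]

# Example query: which of these media leave an archive of the communication?
print(media_with("archived", True))   # ['email', 'public social media post']
```

      Laying the media out this way makes it easier to see why, as the passage says, different forms of communication suit different tasks.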

      This article made me realize that the Internet does not simply "copy" offline modes of communication, but creates communication methods with different characteristics that can more efficiently support group collaboration and information sharing.

  9. social-media-ethics-automation.github.io
    1. Wikipedia:Wikipedians. November 2023. Page Version ID: 1184672006. URL: https://en.wikipedia.org/w/index.php?title=Wikipedia:Wikipedians&oldid=1184672006 (visited on 2023-12-08).

      It talks about what Wikipedians are: the volunteer contributors who write and edit Wikipedia, and how that community is composed.

    1. 15.1.1. No Moderators# Some systems have no moderators. For example, a personal website that can only be edited by the owner of the website doesn’t need any moderator set up (besides the person who makes their website). If a website does let others contribute in some way, and is small, no one may be checking and moderating it. But as soon as the wrong people (or spam bots) discover it, it can get flooded with spam, or have illegal content put up (which could put the owner of the site in legal jeopardy). 15.1.2. Untrained Staff# If you are running your own site and suddenly realize you have a moderation problem you might have some of your current staff (possibly just yourself) start handling moderation. As moderation is a very complicated and tricky thing to do effectively, untrained moderators are likely to make decisions they (or other users) regret. 15.1.3. Dedicated Moderation Teams# After a company starts working on moderation, they might decide to invest in teams specifically dedicated to content moderation. These teams of content moderators could be considered human computers hired to evaluate examples against the content moderation policy of the platform they are working for. 15.1.4. Individuals moderating their own spaces# You can also have people moderate their own spaces. For example: when you text on the phone, you are in charge of blocking numbers if you want to (though the phone company might warn you of potential spam or scams) When you make posts on Facebook or upload videos to YouTube, you can delete comments and replies Also in some of these systems, you can allow friends access to your spaces to let them help you moderate them. 15.1.5. Volunteer Moderation# Letting individuals moderate their own spaces is expecting individuals to put in their own time and labor. You can do the same thing with larger groups and have volunteers moderate them. Reddit does something similar where subreddits are moderated by volunteers, and Wikipedia moderators (and editors) are also volunteers. 15.1.6. Automated Moderators (bots)# Another strategy for content moderation is using bots, that is computer programs that look through posts or other content and try to automatically detect problems. These bots might remove content, or they might flag things for human moderators to review.
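
      The last strategy, automated moderators (bots), is easiest for me to understand with a small sketch. This is my own minimal illustration, not the book's or any platform's real code; the phrase lists and the three outcomes are made-up assumptions. The bot removes the clearest violations and flags borderline posts for human moderators to review:

```python
# A minimal sketch of an automated moderation bot (my own illustration; the
# phrase lists and decision rules are made-up assumptions for demonstration).

BANNED_PHRASES = ["buy followers now", "free crypto giveaway"]    # clear spam
SUSPICIOUS_PHRASES = ["click this link", "limited time offer"]    # maybe spam

def moderate(post_text):
    """Return 'remove', 'flag_for_human_review', or 'allow' for one post."""
    text = post_text.lower()
    if any(phrase in text for phrase in BANNED_PHRASES):
        return "remove"                   # clear violation: the bot acts alone
    if any(phrase in text for phrase in SUSPICIOUS_PHRASES):
        return "flag_for_human_review"    # uncertain: escalate to a human
    return "allow"

posts = [
    "Check out my cat photos!",
    "FREE CRYPTO GIVEAWAY, click this link",
    "Limited time offer on concert tickets",
]
for post in posts:
    print(moderate(post), "-", post)
```

      Real platforms use far more sophisticated classifiers, but the remove / flag / allow split matches the passage's point that bots either act on their own or hand things to human moderators.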

      This text provides a relatively complete framework for content moderation, but the discussion is somewhat dry and lacks analysis of the advantages and disadvantages of the different approaches, which makes the reading experience feel a bit superficial. If the article explored the strengths, limitations, and suitable scenarios of each approach in more depth, it would be more substantial.

  10. social-media-ethics-automation.github.io
    1. Spamming. December 2023. Page Version ID: 1187995774. URL: https://en.wikipedia.org/w/index.php?title=Spamming&oldid=1187995774 (visited on 2023-12-08).

      Spam refers to large volumes of unsolicited, repetitive messages, usually used for advertising, fraud (such as phishing), or other improper purposes, and it is widespread on platforms such as email, instant messaging, social media, and forums. Because the cost of sending spam is extremely low and it is difficult to trace, many criminals use it for profit, but this also brings productivity losses, fraud risks, and extra burdens on Internet service providers. As a result, many countries have introduced anti-spam regulations. The term "spam" originated from a Monty Python comedy sketch; it was first used in the 1980s to describe flooding BBSs and chat rooms with repeated messages, and later extended to Usenet and email. The earliest spam in history can be traced back to a telegram advertisement in 1864, the starting point of modern Internet spam is considered to be a mass email sent in 1978, and the 1994 "Green Card Spam" incident further attracted public attention. With the development of the Internet, spam gradually expanded from Usenet to email and eventually formed a commercial industry. By 2009, most spam worldwide was sent in English and spread to other language markets through automatic translation tools, making its reach even wider.

    1. What Content Gets Moderated# Social media platforms moderate (that is ban, delete, or hide) different kinds of content. There are a number of categories that they might ban things: 14.1.1. Quality Control# In order to make social media sites usable and interesting to users, they may ban different types of content such as advertisements, disinformation, or off-topic posts. Almost all social media sites (even the ones that claim “free speech”) block spam [n1], mass-produced unsolicited messages, generally advertisements, scams, or trolling. Without quality control moderation, the social media site will likely fill up with content that the target users of the site don’t want, and those users will leave. What content is considered “quality” content will vary by site, with 4chan considering a lot of offensive and trolling content to be “quality” but still banning spam (because it would make the site repetitive in a boring way), while most sites would ban some offensive content. 14.1.2. Legal Concerns# Social media sites also might run into legal concerns with allowing some content to be left up on their sites, such as copyrighted material (like movie clips) or child sexual abuse material (CSAM). So most social media sites will often have rules about content moderation, and at least put on the appearance of trying to stop illegal content (though a few will try to move to countries that won’t get them in trouble, like 8kun is getting hosted in Russia). With copyrighted content, the platform YouTube is very aggressive in allowing movie studios to get videos taken down, so many content creators on YouTube have had their videos taken down erroneously [n2]. 14.1.3. Safety# Another concern is for the safety of the users on the social media platform (or at least the users that the platform cares about). Users who don’t feel safe will leave the platform, so social media companies are incentivized to help their users feel safe. So this often means moderation to stop trolling and harassment. 14.1.4. Potentially Offensive# Another category is content that users or advertisers might find offensive. If users see things that offend them too often, they might leave the site, and if advertisers see their ads next to too much offensive content, they might stop paying for ads on the site. So platforms might put limits on language (e.g., racial slurs), violence, sex, and nudity. Sometimes different users or advertisers have different opinions on what should be allowed or not. For example, “The porn ban of 2018 was a defining event for Tumblr that led to a 30 percent drop in traffic and a mass exodus of users that blindsided the company” [n3].

      After reading this text, I find it rational but slightly subjective. It does offer a systematic analysis of social media content moderation, but it carries some implicit value judgments and insinuations that make me somewhat reserved about its objectivity.

  11. social-media-ethics-automation.github.io
    1. Anya Kamenetz. Facebook's own data is not as conclusive as you think about teens and mental health. NPR, October 2021. URL: https://www.npr.org/2021/10/06/1043138622/facebook-instagram-teens-mental-health (visited on 2023-12-08).

      This article explores the impact of social media, especially Instagram, on the mental health of adolescents and the related controversies. Facebook whistleblower Frances Haugen testified before Congress that the company's products hurt teenagers, but researchers noted that Facebook's data is based on subjective surveys with small sample sizes and lacks scientific rigor. Most research over the past few decades has not found a direct connection between social media and mental health, and some studies have even shown that some teenagers who are struggling say the apps make them feel better. Experts argue that rather than exaggerating the harms of social media, we should push platforms to improve, for example by providing mental health resources and positive content; future discussions should focus on how to reduce the negative impacts while preserving social media's positive role.

    1. Some people view internet-based social media (and other online activities) as inherently toxic and therefore encourage a digital detox [m6], where people take some form of a break from social media platforms and digital devices. While taking a break from parts or all of social media can be good for someone’s mental health (e.g., doomscrolling is making them feel more anxious, or they are currently getting harassed online), viewing internet-based social media as inherently toxic and trying to return to an idyllic time from before the Internet is not a realistic or honest view of the matter. In her essay “The Great Offline,” [m7] Lauren Collee argues that this is just a repeat of earlier views of city living and the “wilderness.” As white Americans were colonizing the American continent, they began idealizing “wilderness” as being uninhabited land (ignoring the Indigenous people who already lived there, or kicking them out or killing them). In the 19th century, as wilderness tourism was taking off as an industry, natural landscapes were figured as an antidote to the social pressures of urban living, offering truth in place of artifice, interiority in place of exteriority, solitude in place of small talk. Similarly, advocates for digital detox build an idealized “offline” separate from the complications of modern life: Sherry Turkle, author of Alone Together, characterizes the offline world as a physical place, a kind of Edenic paradise. “Not too long ago,” she writes, “people walked with their heads up, looking at the water, the sky, the sand” — now, “they often walk with their heads down, typing.” […] Gone are the happy days when families would gather around a weekly televised program like our ancestors around the campfire! But Lauren Collee argues that by placing the blame on the use of technology itself and making not using technology (a digital detox) the solution, we lose our ability to deal with the nuances of how we use technology and how it is designed: I’m no stranger to apps that help me curb my screen time, and I’ll admit I’ve often felt better for using them. But on a more communal level, I suspect that cultures of digital detox — in suggesting that the online world is inherently corrupting and cannot be improved — discourage us from seeking alternative models for what the internet could look like. I don’t want to be trapped in cycles of connection and disconnection, deleting my social media profiles for weeks at a time, feeling calmer but isolated, re-downloading them, feeling worse but connected again. For as long as we keep dumping our hopes into the conceptual pit of “the offline world,” those hopes will cease to exist as forces that might generate change in the worlds we actually live in together. So in this chapter, we will not consider internet-based social media as inherently toxic or beneficial for mental health. We will be looking for more nuance and where things go well, where they do not, and why.

      This post made me rethink the impact of social media and our attitude toward technology. In the past, when people talked about digital detox, most views emphasized how social media harms mental health, distracts people, and makes them anxious and addicted. But this post reminds me that treating the technology itself as the problem and trying to stay away from it completely can be an escape rather than a solution.

  12. social-media-ethics-automation.github.io
    1. Evolution of cetaceans. November 2023. Page Version ID: 1186568602. URL: https://en.wikipedia.org/w/index.php?title=Evolution_of_cetaceans&oldid=1186568602 (visited on

      Cetaceans evolved from within the Artiodactyla (their closest living relatives being hippos) starting about 50 million years ago, undergoing an evolutionary transition from land to sea. Modern cetaceans are divided into baleen whales (Mysticeti) and toothed whales (Odontoceti), which diverged between 28 and 33 million years ago, with toothed whales evolving echolocation and baleen whales gradually adapting to filter feeding. Studies show that cetaceans, despite being fully aquatic, retain some of the skeletal characteristics of land mammals.

    1. Biological evolution is how living things change, generation after generation, and how all the different forms of life, from humans to bacteria, came to be. Evolution occurs when three conditions are present: Replication (with Inheritance) An organism can make a new copy of itself, which inherits its characteristics Variations / Mutations The characteristics of an organism are sometimes changed, in a way that can be inherited by future copies Natural Selection Some characteristics make it more or less likely for an organism to compete for resources, survive, and make copies of itself When those three conditions are present, then over time successive generations of organisms will: be more adapted to their environment divide into different groups and diversify stumble upon strategies for competing with or cooperating with other organisms.
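
      Because the passage describes evolution's three conditions almost like an algorithm, I tried sketching them as a tiny Python simulation (my own toy model, not the book's code; the numbers, target, and mutation range are arbitrary assumptions). "Organisms" are just numbers, fitness is closeness to a target value, and each generation copies the survivors with small random mutations:

```python
import random

# A toy sketch of the three conditions described above (my own illustration):
# replication with inheritance, random mutation, and natural selection.
# "Organisms" are just numbers; fitness is closeness to a target value.

TARGET = 100

def fitness(organism):
    return -abs(organism - TARGET)   # higher is better (closer to the target)

population = [random.randint(0, 50) for _ in range(20)]   # initial generation

for generation in range(30):
    # Natural selection: keep the half of the population closest to the target.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    # Replication with inheritance + mutation: each survivor makes two copies,
    # each copy inheriting the parent's value plus a small random change.
    population = [parent + random.randint(-3, 3)
                  for parent in survivors for _ in range(2)]

# Over generations the population tends to drift toward the target value.
print("best organism after 30 generations:", max(population, key=fitness))
```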

      This passage clearly explains the core mechanisms of biological evolution and gives me a more intuitive understanding of how evolution happens. I find it most interesting that the three conditions of evolution (replication, mutation, and natural selection) are deceptively simple yet determine the evolution of all life forms. The concept of natural selection, in particular, made me think about how organisms survive by adapting to their environment, such as giraffes evolving longer necks to reach leaves high up in trees.

  13. social-media-ethics-automation.github.io
    1. Zack Whittaker. Facebook won't let you opt out of its phone number 'look up' setting. TechCrunch, March 2019. URL: https://techcrunch.com/2019/03/03/facebook-phone-number-look-up/ (visited on 2023-12-07).

      Facebook was found to be using phone numbers provided for two-factor authentication (2FA) to target users with ads, raising privacy concerns. Even if users hide their phone numbers, the default settings still allow others to look up their profiles using those numbers. Security experts criticized Facebook for weakening privacy under the guise of security, warning that phone numbers could be exploited by hackers. In response, Facebook claimed the feature was "not new" but refused to offer an option to completely disable it. As a result, the Irish Data Protection Agency has launched an investigation to understand how Facebook collects and uses phone numbers.

    1. When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users. Some recommendation algorithms can be simple such as reverse chronological order, meaning it shows users the latest posts (like how blogs work, or Twitter’s “See latest tweets” option). They can also be very complicated taking into account many factors, such as: Time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today) Whether the post was made or liked by my friends or people I’m following How much this post has been liked, interacted with, or hovered over Which other posts I’ve been liking, interacting with, or hovering over What people connected to me or similar to me have been liking, interacting with, or hovering over What people near you have been liking, interacting with, or hovering over (they can find your approximate location, like your city, from your internet IP address, and they may know even more precisely) This perhaps explains why sometimes when you talk about something out loud it gets recommended to you (because someone around you then searched for it). Or maybe they are actually recording what you are saying and recommending based on that. Phone numbers or email addresses (sometimes collected deceptively [k1]) can be used to suggest friends or contacts. And probably many more factors as well! Now, how these algorithms precisely work is hard to know, because social media sites keep these algorithms secret, probably for multiple reasons: They don’t want another social media site copying their hard work in coming up with an algorithm They don’t want users to see the algorithm and then be able to complain about specific details They don’t want malicious users to see the algorithm and figure out how to best make their content go viral
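
      To make the idea of a recommendation algorithm concrete for myself, here is a toy Python sketch (my own example; the weights and post fields are invented assumptions, not any platform's actual formula). It scores each post using a few of the factors listed above (recency, likes, and whether a friend posted it) and then orders the feed by score:

```python
# A toy recommendation algorithm (my own sketch; the weights and fields are
# made-up assumptions, not any real platform's formula). It combines a few of
# the factors listed above: recency, likes, and whether a friend made the post.

posts = [
    {"id": 1, "hours_old": 1,  "likes": 3,   "from_friend": True},
    {"id": 2, "hours_old": 30, "likes": 500, "from_friend": False},
    {"id": 3, "hours_old": 5,  "likes": 40,  "from_friend": False},
]

def score(post):
    recency = 1 / (1 + post["hours_old"])      # newer posts score higher
    popularity = post["likes"] / 100           # more-liked posts score higher
    friend_bonus = 2 if post["from_friend"] else 0
    return recency + popularity + friend_bonus

# Reverse chronological order would instead just be:
#   sorted(posts, key=lambda p: p["hours_old"])
ranked = sorted(posts, key=score, reverse=True)
print([post["id"] for post in ranked])         # the order the feed would show them in
```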

      The article mentions phenomena such as content being recommended after you talk about it out loud, or friend suggestions based on phone numbers, which makes the material feel closely tied to the everyday experience of using social media and easy for readers to relate to.

  14. social-media-ethics-automation.github.io
    1. Social model of disability. November 2023. Page Version ID: 1184222120. URL: https://en.wikipedia.org/w/index.php?title=Social_model_of_disability&oldid=1184222120#Social_construction_of_disability (visited on 2023-12-07).

      This text discusses how disability is a socially constructed concept shaped by societal attitudes, power structures, and historical contexts. In the past, disability was tied to moral failure, but during the Enlightenment, it became associated with biology, though still influenced by societal definitions of "health." Disability is often judged based on societal perceptions of productivity and worthiness. The unequal treatment of events like the Paralympics highlights how society undervalues people with disabilities, reinforcing the social construction of disability.

  15. social-media-ethics-automation.github.io
    1. Many of the disabilities we mentioned above were permanent disabilities, that is, disabilities that won’t go away. But disabilities can also be temporary disabilities, like a broken leg in a cast, which may eventually get better. Disabilities can also vary over time (e.g., “Today is a bad day for my back pain”). Disabilities can even be situational disabilities, like the loss of fine motor skills when wearing thick gloves in the cold, or trying to watch a video on your phone in class with the sound off, or trying to type on a computer while holding a baby.

      I was impressed by the author's mention of "situational disability." Everyone may experience the trouble of "disability" in certain situations, such as losing fine motor skills when wearing thick gloves or being unable to concentrate in a noisy environment. This reminds me that our physical abilities are dynamic, and everyone will experience "insufficiency" in specific situations, so we should maintain more empathy and understanding for others' experiences of disability.

    1. Right to privacy. November 2023. Page Version ID: 1186826760. URL: https://en.wikipedia.org/w/index.php?title=Right_to_privacy&oldid=1186826760 (visited on 2023-12-05).

      Direct messages (PM/DM) are private messages people send on social media or messaging apps, only visible to those involved. There are different types, like social media DMs (Twitter, Facebook), instant messaging DMs (WhatsApp, Snapchat), and peer-to-peer (P2P) DMs, which let users fully control their data. DMs aren’t just for personal chats—they also help with workplace communication, though they can blur the line between work and personal life. As privacy concerns grow, DMs are becoming more popular across different platforms. Still, some social media platforms may store and even monitor users' private messages.

  16. social-media-ethics-automation.github.io
    1. There might be some things that we just feel like aren’t for public sharing (like how most people wear clothes in public, hiding portions of their bodies) We might want to discuss something privately, avoiding embarrassment that might happen if it were shared publicly We might want a conversation or action that happens in one context not to be shared in another (context collapse) We might want to avoid the consequences of something we’ve done (whether ethically good or bad), so we keep the action or our identity private We might have done or said something we want to be forgotten or make at least made less prominent We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private We might want to avoid physical danger from a stalker, so we might keep our location private We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad) When we use social media platforms though, we at least partially give up some of our privacy. For example, a social media application might offer us a way of “Private Messaging” [i1] (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be not helpful), and we want to open access to those “private” messages to prove that they were sent.

      The article highlights the reality that social media "private messages" are not completely private, which is an important but often overlooked fact for ordinary users. I had always assumed that private messages were safe, but in fact they are still stored on the servers of social media companies and may be reviewed by algorithms or by humans.

  17. Jan 2025
  18. social-media-ethics-automation.github.io
    1. Web tracking. October 2023. Page Version ID: 1181294364. URL: https://en.wikipedia.org/w/index.php?title=Web_tracking&oldid=1181294364 (visited on 2023-12-05).

      The source explains what web tracking is and how it is used to follow users' activity across the web.

    1. Social media platforms collect various types of data on their users. Some data is directly provided to the platform by the users. Platforms may ask users for information like: email address name profile picture interests friends Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all this, but they might): when users are logged on and logged off who users interact with What users click on what posts users pause over where users are located what users send in direct messages to each other Online advertisers can see what pages their ads are being requested on, and track users [h1] across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website. Additionally, social media might collect information about non-users, such as when a user posts a picture of themselves with a friend who doesn’t have an account, or a user shares their phone contact list with a social media site, some of whom don’t have accounts (Facebook does this [h2]). Social media platforms then use “data mining” to search through all this data to try to learn more about their users, find patterns of behavior, and in the end, make more money.
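
      The "data mining" step at the end can be illustrated with a short sketch (my own example with made-up interaction events, not any platform's actual pipeline): given a log of collected events, count which topics a user engages with most, the kind of profile that could then feed ads or recommendations:

```python
from collections import Counter

# A minimal sketch of "data mining" over collected interaction data
# (my own illustration with made-up events, not any platform's real pipeline).

events = [
    {"user": "alice", "action": "click", "topic": "shoes"},
    {"user": "alice", "action": "pause", "topic": "shoes"},
    {"user": "alice", "action": "like",  "topic": "hiking"},
    {"user": "bob",   "action": "click", "topic": "cameras"},
]

def top_topics(user, event_log, n=3):
    """Count which topics a user interacts with most often."""
    counts = Counter(e["topic"] for e in event_log if e["user"] == user)
    return counts.most_common(n)

# A platform could use a profile like this to pick ads or recommendations.
print(top_topics("alice", events))   # [('shoes', 2), ('hiking', 1)]
```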

      I feel like the collection and use of user data by social media platforms is a double-edged sword. Although it can provide personalized services and improve user experience, it also brings challenges in terms of privacy, security, and social impact. In order to deal with these problems, more transparent policies and stronger user data protection measures are needed. At the same time, users should also raise their awareness of data privacy and take the initiative to protect their information security.

  19. social-media-ethics-automation.github.io
    1. lonelygirl15. November 2023. Page Version ID: 1186146298. URL: https://en.wikipedia.org/w/index.php?title=Lonelygirl15&oldid=1186146298 (visited on 2023-11-

      The source is about Lonelygirl15. It was an American science fiction thriller web series. The series followed the life of Bree Avery, a 16-year-old homeschooled girl whose seemingly ordinary vlogs revealed a darker story involving her parents’ mysterious cult, The Order. Initially presented as real vlogs, the series quickly gained popularity, becoming YouTube’s most-subscribed channel.

      As viewers questioned its authenticity, investigations revealed the series was fictional, created by a team that scripted Bree’s story and managed her online presence. Despite being exposed as a hoax in September 2006, the series continued to grow in popularity, peaking in 2007.

  20. social-media-ethics-automation.github.io
    1. These needs may not always be as obvious in highly individualized societies, like Post-Enlightenment Europe and the United States. The possibility for self-reliance has been created in part by making certain things dependable and institutionalized. You can go get yourself food without feeling like you have to trust anyone because you can just go to the store (which has to adhere to corporate legal requirements) and buy food (the supply of which is made stable by complex networks of growing, manufacturing, and transportation, covered by the assurances of FDA-compliant labeling) from people who work there (and are subject to labor laws and HR regulations, which, if they are not followed, means the staff person does not get paid, so their wellbeing depends on them doing their job). The need to trust other people is obscured by the many institutions that we have created. Institutions have ways, sometimes, of getting around human whims and surprises. But at the end of the day, it is still hugely important to us that we feel clear about who can be trusted, and for what.

      This passage is a logically clear and vivid discussion that successfully explains the deep human need for trust and authenticity and its social significance. I think the example is very good: everyone has had the experience of going to the supermarket to buy things, and this everyday example made me understand the argument at once and agree with it.

  21. social-media-ethics-automation.github.io
    1. Assassination of Martin Luther King Jr. November 2023. Page Version ID: 1186577416. URL: https://en.wikipedia.org/w/index.php?title=Assassination_of_Martin_Luther_King_Jr.&oldid=1186577416#Alleged_government_involvement (visited on 2023-12-05).

      This passage discusses allegations of conspiracy surrounding the assassination of Martin Luther King Jr. Loyd Jowers, a businessman near the assassination site, claimed in 1993 that he conspired with the mafia and government to kill King, asserting that James Earl Ray was a scapegoat. Jowers’ claims were inconsistent and deemed not credible by the Department of Justice.

    1. Disruption and provoking reaction# Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction. When the goal is provoking an emotional reaction, it is often for a negative emotion, such as anger or emotional pain. When the goal is disruption, it might be attempting to derail a conversation (e.g., concern trolling [g4]), or make a space no longer useful for its original purpose (e.g., joke product reviews), or try to get people to take absurd fake stories seriously [g5].

      I think this content is clearly defined and easy to understand. The opening paragraph lays out the behavioral characteristics of trolling (such as posting false, upsetting, or absurd content) and its main goals (provoking an emotional reaction or causing disruption).

  22. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Mark R. Cheathem. Conspiracy Theories Abounded in 19th-Century American Politics. URL: https://www.smithsonianmag.com/history/conspiracy-theories-abounded-19th-century-american-politics-180971940/ (visited on 2023-11-24).

      This source examines the role of conspiracy theories in 19th-century American politics, highlighting their use as a tool to influence elections and sway public opinion. Beginning with the 1824 election and the “corrupt bargain” accusations against John Quincy Adams and Henry Clay, conspiracy rhetoric became a recurring feature of political campaigns. In 1828, both sides accused each other of plotting coups, while the Anti-Masonic Party in the 1830s accused Freemasons of corruption, itself becoming a target of conspiracies in the 1832 election.

      The “Bank War” during Jackson’s presidency saw further accusations of financial manipulation and elite conspiracies. By 1836, conspiracy theories targeted Martin Van Buren and Richard M. Johnson, with baseless claims of religious and racial plots. The passage argues that such rhetoric, while often unfounded, undermined trust in democracy, increased voter cynicism, and allowed elites to consolidate power.

    2. Mark R. Cheathem. Conspiracy Theories Abounded in 19th-Century American Politics. URL: https://www.smithsonianmag.com/history/conspiracy-theories-abounded-19th-century-american-politics-180971940/ (visited on 2023-11-24).

      This source explores the influence of conspiracy theories in American political history, especially in the 1820s and 1830s. At that time, American political parties such as the Democratic Party, the National Republican Party, the Anti-Masonic Party, and the Whig Party often used conspiracy theories as political tools to attract voter support. For example, after the 1824 presidential election, Andrew Jackson's supporters accused John Quincy Adams and Henry Clay of a "corrupt bargain," believing they had conspired to manipulate the election results. During the 1832 election, the Anti-Masonic Party accused Masonic organizations of secretly manipulating politics, reflecting the distrust of elites at the time. The spread of these conspiracy theories exacerbated public distrust of the government and weakened the credibility of the democratic system. This shows that conspiracy theories have existed throughout American political history and have had a lasting negative impact on social trust and political stability.

    1. The book Writing on the Wall: Social Media - The First 2,000 Years [e1] by Tom Standage outlines some of the history of social media before internet-based social media platforms, such as in times before the printing press: graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts, and books and news write-ups had to be copied by hand, so that only the most desired books went “viral” and spread. Later, sometime after the printing press, Standage highlights how there was an unusual period in American history that roughly took up the 1900s where, in America, news sources were centralized in certain newspapers and then the big 3 TV networks. In this period of time, these sources were roughly in agreement and broadcast news out to the country, making a more unified, consistent news environment (though, of course, we can point out how they were biased in ways like being almost exclusively white men). Before this centralization of media in the 1900s, newspapers and pamphlets were full of rumors and conspiracy theories [e2]. And now as the internet and social media have taken off in the early 2000s, we are again in a world full of rumors and conspiracy theories.

      This passage starts with history, explaining why the current social media environment is full of rumors and conspiracy theories, and points out that this phenomenon is not new but a recurrence of an older pattern. However, while it mentions the problem of rumors and conspiracy theories on modern social media, it does not go into detail about why these phenomena are more likely to occur in a decentralized environment (for example, technical factors such as algorithmic recommendations and information cocoons).

    2. The book Writing on the Wall: Social Media - The First 2,000 Years [e1] by Tom Standage outlines some of the history of social media before internet-based social media platforms, such as in times before the printing press: graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts, and books and news write-ups had to be copied by hand, so that only the most desired books went “viral” and spread. Later, sometime after the printing press, Standage highlights how there was an unusual period in American history that roughly took up the 1900s where, in America, news sources were centralized in certain newspapers and then the big 3 TV networks. In this period of time, these sources were roughly in agreement and broadcast news out to the country, making a more unified, consistent news environment (though, of course, we can point out how they were biased in ways like being almost exclusively white men). Before this centralization of media in the 1900s, newspapers and pamphlets were full of rumors and conspiracy theories [e2]. And now as the internet and social media have taken off in the early 2000s, we are again in a world full of rumors and conspiracy theories.

      I felt that social media is not a unique product of modern society but a continuation and evolution of how humans communicate. From the letter exchanges of ancient Rome to today's digital platforms, humans have always sought efficient means of communication. This made me realize that understanding the history of social media can help us look at how information spreads today more comprehensively and think about how to communicate more effectively in modern society.

  23. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Twitter. November 2023. Page Version ID: 1187856185. URL: https://en.wikipedia.org/wiki/Twitter (visited on 2023-12-01).

      Given the developments described in the article, how might the rebranding to X and the accompanying policy changes influence user trust and engagement on the platform?

  24. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Steven Tweedie. This disturbing image of a Chinese worker with close to 100 iPhones reveals how App Store rankings can be manipulated. February 2015. URL:

      This phenomenon is actually quite common, regardless of country or industry. However, I didn't expect ranking manipulation to be so widespread on Apple's App Store, as I've always regarded Apple as a reputable company. I also feel that the working conditions of the worker in the photo deserve attention; it seems her job may not meet labor rights standards.

    1. Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them. Note that sometimes people use “bots” to mean inauthentically run accounts, such as those run by actual humans, but are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm:

      Although this article clearly distinguishes between "bots" controlled by programs and "fake accounts" operated by humans, in practice the two are often mixed together. For example, an account's content can be generated automatically by a program but monitored and adjusted by humans. How should this situation be classified? Is a more detailed taxonomy needed? (A minimal sketch of the program-controlled case is included below.)
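      To make the program-controlled side of this distinction concrete, here is a minimal sketch of a bot posting through an ordinary account. It uses the Reddit API via the praw library; the library choice, placeholder credentials, and subreddit name are my own illustrative assumptions, not something taken from the annotated text.

      import praw  # third-party Reddit API wrapper (pip install praw)

      # Placeholder credentials; a real bot would load these from a config file
      # or environment variables instead of hard-coding them.
      reddit = praw.Reddit(
          client_id="YOUR_CLIENT_ID",
          client_secret="YOUR_CLIENT_SECRET",
          username="YOUR_BOT_ACCOUNT",
          password="YOUR_PASSWORD",
          user_agent="example-demo-bot by u/YOUR_BOT_ACCOUNT",
      )

      # The bot acts through the account just like a human user would,
      # which is why bot and human activity can look identical from outside.
      reddit.subreddit("test").submit(
          title="Scheduled update",
          selftext="This post was submitted automatically by a script.",
      )

      From the platform's point of view, nothing in these requests marks the post as automated, which is exactly why the hybrid human-plus-program accounts mentioned above are hard to classify.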

    1. In my example, we use two translators: The English-French speaker and the French-Arabic speaker. Then in order for me to communicate with the Arabic speaker, we pass our message to the translators, and they communicate with each other using French as an intermediate language. So messages will be translated from English to French and then to Arabic one way, and from Arabic to French to English the other.

      I find the analogy between human language translation and how computers translate between languages quite intriguing. The comparison is apt: it clearly explains how a message (or a program) can pass through an intermediate form, and it helped me quickly grasp how information is transformed between different representations. A small sketch of the same idea in code follows below.
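      To see the analogy in code, here is a tiny illustrative Python sketch (my own construction, not taken from the annotated text) that chains two word-for-word "translators" through French as the shared intermediate language, much as the passage chains the English-French and French-Arabic speakers.

      # Two tiny word-level "translators" that only share French in common.
      english_to_french = {"hello": "bonjour", "friend": "ami"}
      french_to_arabic = {"bonjour": "مرحبا", "ami": "صديق"}

      def translate(message, dictionary):
          """Translate each word the dictionary knows; leave other words unchanged."""
          return [dictionary.get(word, word) for word in message]

      message = ["hello", "friend"]
      intermediate = translate(message, english_to_french)  # English -> French
      result = translate(intermediate, french_to_arabic)    # French -> Arabic
      print(intermediate, result)

      Neither translator understands the other end's language directly; the message only gets through because both can handle the intermediate form, which mirrors how code is often translated step by step rather than in one jump.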

    1. Being and becoming an exemplary person (e.g., benevolent; sincere; honoring and sacrificing to ancestors; respectful to parents, elders and authorities, taking care of children and the young; generous to family and others). These traits are often performed and achieved through ceremonies and rituals (including sacrificing to ancestors, music, and tea drinking), resulting in a harmonious society.

      Confucianism and Taoism differ significantly. Confucianism emphasizes structured social roles and moral education to cultivate virtue and maintain social harmony. In contrast, Taoism advocates for spontaneity, suggesting that forced actions may lead to unintended consequences. Both philosophies present compelling arguments. I am curious whether contemporary society leans more towards Confucian or Taoist principles. Would integrating both philosophies result in better or worse societal outcomes?