- Last 7 days
-
social-media-ethics-automation.github.io
-
It isn’t clear what should be considered “nature” in a social media environment (human nature? the nature of the design of the social media platform? are bots unnatural?), so we’ll instead just talk about selection. When content (and modified copies of content) is in a position to be replicated, there are factors that determine whether it gets selected for replication or not. As humans look at the content they see on social media, they decide whether they want to replicate it for some reason, such as:
- “that’s funny, so I’ll retweet it”
- “that’s horrible, so I’ll respond with an angry face emoji”
- “reposting this will make me look smart”
- “I am inspired to use part of this to make a different thing”
Groups and organizations make their own decisions on what social media content to replicate as well (e.g., a news organization might find a social media post newsworthy, so they write articles about it). Additionally, content may be replicated because of:
- Paid promotion and ads, where someone pays money to have their content replicated
- Astroturfing, where crowds, often of bots, are paid to replicate social media content (e.g., like, retweet)
Finally, social media platforms use algorithms and design layouts which determine what posts people see. There are various rules and designs social media sites can use, and they can amplify natural selection and unnatural selection in various ways. They can do this through recommendation algorithms, as we saw last chapter, as well as by choosing what actions are allowed, what amount of friction is given to those actions, and what data is collected and displayed. Different designs of social media platforms will have different consequences in what content goes viral, just like how different physical environments do.
This passage examines the concept of selection in a social media environment, highlighting how various factors influence whether content gets replicated. It explores human motivations like humor, outrage, self-presentation, or inspiration, alongside decisions made by groups or organizations, such as news outlets amplifying posts deemed newsworthy. Additionally, it addresses artificial influences like paid promotions and astroturfing, where bots are used to artificially replicate content.
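The selection dynamic described above can be sketched as a toy simulation: posts with more “reshare appeal” get replicated more often, the way appealing content outcompetes mundane content. The post names and weights below are hypothetical, purely to illustrate selection pressure, not anything from the text:

```python
import random

def simulate_selection(posts, rounds=1000, rng=None):
    """Toy model of content selection: each round, one post gets reshared.

    `posts` maps a (hypothetical) post name to its relative reshare
    appeal. Returns how many times each post was replicated.
    """
    rng = rng or random.Random(0)  # fixed seed so the sketch is repeatable
    names = list(posts)
    weights = [posts[n] for n in names]
    counts = {n: 0 for n in names}
    for _ in range(rounds):
        # Selection step: more appealing posts are more likely to be chosen.
        chosen = rng.choices(names, weights=weights, k=1)[0]
        counts[chosen] += 1
    return counts

# Illustrative weights: "funny" content is selected far more than "mundane".
counts = simulate_selection({"funny": 5.0, "outrageous": 3.0, "mundane": 0.5})
```

A different “environment” (different weights, mirroring a different platform design) would produce a different winner, which is the chapter’s point about design shaping virality.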
-
Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content. This is particularly important for people who make their money from social media content. For example, in the case of the simple “show latest posts” algorithm, the best way to get your content seen is to constantly post and repost your content (though if you annoy users too much, it might backfire). Other strategies include things like:
- Clickbait: trying to give you a mystery you have to click to find the answer to (e.g., “You won’t believe what happened when this person tried to eat a stapler!”). They do this to boost clicks on their link, which they hope boosts them in the recommendation algorithm and gets their ads more views.
- Trolling: by provoking reactions, they hope to boost their content more.
- Coordinated actions: have many accounts (possibly including bots) like a post, or many people use a hashtag, or have people trade positive reviews.
This passage discusses strategies used by social media users to game recommendation algorithms, such as clickbait, provocative content, or coordinated actions to boost visibility, which are especially impactful for creators relying on social media income. It also mentions YouTuber F.D. Signifier exploring YouTube’s algorithm and interviewing other creators, particularly Black creators, to shed light on these practices. While these strategies may generate short-term engagement, they also carry risks of backlash.
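The “show latest posts” incentive is easy to demonstrate in code: if ranking is purely by recency, whoever posts most often owns the feed. This is an illustrative toy, not any real platform’s algorithm; the account names and timings are made up:

```python
from datetime import datetime, timedelta

def latest_posts_feed(posts, limit=5):
    """A minimal 'show latest posts' ranking: newest first, nothing else.

    `posts` is a list of (author, timestamp) pairs. Under this rule the
    only way to stay visible is to post constantly, which is exactly the
    incentive described in the passage.
    """
    return sorted(posts, key=lambda p: p[1], reverse=True)[:limit]

now = datetime(2024, 11, 1, 12, 0)
posts = [("occasional_poster", now - timedelta(hours=6))]
# An account reposting every few minutes crowds everyone else out of the feed.
posts += [("frequent_poster", now - timedelta(minutes=5 * i)) for i in range(10)]

feed = latest_posts_feed(posts)
authors = [author for author, _ in feed]
```

The occasional poster never appears in the top of the feed, which is why creators under such a rule are pushed toward constant reposting.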
-
- Nov 2024
-
When we think about repair and reconciliation, many of us might wonder where there are limits. Are there wounds too big to be repaired? Are there evils too great to be forgiven? Is anyone ever totally beyond the pale of possible reconciliation? Is there a point of no return? One way to approach questions of this kind is to start from limit cases. That is, go to the farthest limit and see what we find there by way of a template, then work our way back toward the everyday. Let’s look at two contrasting limit cases: one where philosophers and cultural leaders declared that repairs were possible even after extreme wrongdoing, and one where the wrongdoers were declared unforgivable.
This passage raises profound questions about the boundaries of repair and reconciliation, challenging us to consider whether certain actions or individuals can ever be beyond forgiveness. By proposing an exploration of limit cases—extreme examples where repair was deemed either possible or impossible—it offers a thought-provoking framework for examining the complexities of morality and reconciliation. Such an approach encourages reflection on how these extraordinary cases might inform our understanding of forgiveness and repair in more everyday contexts.
-
In this view, a good parent might see their child doing something bad or dangerous, and tell them to stop. The child may feel shame (they might not be developmentally able to separate their identity from the momentary rejection). The parent may then comfort the child to let the child know that they are not being rejected as a person, it was just their action that was a problem. The child’s relationship with the parent is repaired, and over time the child will learn to feel guilt instead of shame and seek to repair harm instead of hide.
This description highlights the delicate balance parents must strike between correcting a child's behavior and protecting their emotional well-being. By addressing the problematic action first and then offering emotional reassurance, parents help children gradually understand the separation between their mistakes and their identity, which is crucial for healthy emotional development. This approach not only repairs the parent-child relationship but also fosters a more mature response to mistakes in the future, such as taking responsibility and making amends.
-
In much of mainstream Western thought, the individual’s right to freedom is taken as a supreme moral good, and so anything that is viewed as an illegitimate interference with that individual freedom is considered violence or violation. In the founding of the United States, one thing on people’s minds was the way that in a Britain riddled with factions and disagreement, people of one subgroup could not speak freely when another subgroup was in power. This case was unusual because instead of one group being consistently dominant, the Catholic and Protestant communities alternated between being dominant and being oppressed, based on who was king or queen. So the United States wanted to reinforce what they saw as the value of individual freedoms by writing it into the formal, explicit part of our social contract. Thus, we got the famous First Amendment to the Constitution, saying that individuals’ right to freely express themselves in speech, in their religion, in their gatherings, and so on could not legally be interfered with.
This passage underscores the deep-rooted emphasis on individual freedom in Western thought, particularly in the founding principles of the United States. It highlights how historical experiences of alternating oppression in Britain shaped the desire to protect freedoms explicitly through the First Amendment. By doing so, the U.S. aimed to create a system where the rights to speech, religion, and assembly were safeguarded from the shifting dynamics of power and oppression.
-
- Bullying: like sending mean messages through DMs.
- Cyberstalking: continually finding the account of someone, and creating new accounts to continue following them. Or possibly researching the person’s physical location.
- Hacking: hacking into an account or device to discover secrets, or make threats.
- Tracking: an abuser might track the social media use of their partner or child to prevent them from making outside friends. They may even install spy software on their victim’s phone.
- Death threats / rape threats
- Etc.
These examples highlight the alarming ways digital platforms can be misused to harm others, whether through direct actions like sending mean messages or more invasive behaviors like cyberstalking and hacking. The use of technology to track or control someone's actions, such as monitoring social media or installing spyware, reflects a disturbing misuse of power and trust. Additionally, the severity of threats like death or rape underscores the urgent need for stronger safeguards and support systems to protect individuals from such harmful behaviors.
-
When social media users work together, we can consider what problem they are solving. For example, for some of the TikTok duet videos from the virality chapter, the “problem” would be something like “how do we create music out of this source video,” and the different musicians contribute their own piece to the solution. Some other examples:
- In the case of a missing hiker rescued after a Twitter user tracked him down using his last-sent photo, the “problem” was “Where did the hiker disappear?” and the crowd investigated whatever they could to find the solution of the hiker’s location.
- In the case of the Canucks’ staffer who used social media to find the fan who saved his life, the “problem” was “Who is the fan who saved the Canucks’ staffer’s life?” and the solution was basically to try to identify and dox the fan (though hopefully in a positive way).
- In the case of Twitter tracking down the mystery couple in viral proposal photos, the problem was “Who is the couple in the photo?” and the solution was again to basically dox them, though in the article they seemed ok with it.
On social media, user collaboration can be seen as a process of solving a specific “problem.” For instance, in finding a missing hiker, the problem was “Where did the hiker disappear?” and users investigated to locate him; in identifying the fan who saved a Canucks staffer, the problem was “Who is the life-saving fan?” and users worked to uncover the fan’s identity; and in finding the mystery couple from viral proposal photos, the problem was “Who is this couple?” with users teaming up to solve the mystery. These cases show how social media users collectively tackle specific issues.
-
There have been many efforts to use computers to replicate the experience of communicating with someone in person, through things like video chats, or even telepresence robots. But there are ways that attempts to recreate in-person interactions inevitably fall short and don’t feel the same. Instead, though, we can look at different characteristics that computer systems can provide, and find places where computer-based communication works better, and is Beyond Being There. Some of the different characteristics that means of communication can have include (but are not limited to):
- Location: Some forms of communication require you to be physically close, some allow you to be located anywhere with an internet signal.
- Time delay: Some forms of communication are almost instantaneous, some have small delays (you might see this on a video chat system), or have significant delays (like shipping a package).
- Synchronicity: Some forms of communication require both participants to communicate at the same time (e.g., video chat), while others allow the person to respond when convenient (like a mailed physical letter).
- Archiving: Some forms of communication automatically produce an archive of the communication (like a chat message history), while others do not (like an in-person conversation).
- Anonymity: Some forms of communication make anonymity nearly impossible (like an in-person conversation), while others make it easy to remain anonymous.
- Audience: Communication could be private or public, and it could be one-way (no ability to reply), or two+-way where others can respond.
While video chats and telepresence robots try to mimic in-person communication, they often lack its authentic feel. However, computer-based communication has unique strengths, like removing location constraints, offering flexible timing, and providing options for anonymity and archiving. These features enable a communication experience that goes “beyond being there.”
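One way to make these characteristics concrete is to model each medium as a record of its traits and then query along a single axis. The media chosen and their True/False values below are illustrative simplifications of the list above, not claims from the passage:

```python
from dataclasses import dataclass

@dataclass
class Medium:
    """Characteristics of a communication medium, loosely following
    the passage's axes (location, synchronicity, archiving, anonymity)."""
    name: str
    requires_presence: bool   # Location: must participants be co-located?
    synchronous: bool         # Must both parties communicate at once?
    archived: bool            # Is a record produced automatically?
    anonymity_possible: bool  # Can a participant stay anonymous?

# Simplified example values for three media.
in_person = Medium("in-person talk", True, True, False, False)
chat = Medium("text chat", False, False, True, True)
video = Medium("video call", False, True, False, False)

# "Beyond being there": media that beat in-person conversation on some
# axis -- here, those that leave an automatic archive.
archiving = [m.name for m in (in_person, chat, video) if m.archived]
```

Filtering on any single trait shows where computer-based communication is not a worse imitation of being there, but better along that axis.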
-
15.1.1. No Moderators

Some systems have no moderators. For example, a personal website that can only be edited by the owner of the website doesn’t need any moderator set up (besides the person who makes their website). If a website does let others contribute in some way, and is small, no one may be checking and moderating it. But as soon as the wrong people (or spam bots) discover it, it can get flooded with spam, or have illegal content put up (which could put the owner of the site in legal jeopardy).
In systems without moderators, such as personal websites only editable by their owners, moderation isn't typically necessary. However, if the website allows contributions and remains unchecked, it becomes vulnerable to abuse, especially if it attracts spammers or malicious users. Without proper oversight, such content can lead to significant issues, including the risk of spam overload or the posting of illegal materials, which could place the site owner in legal danger.
-
Reddit is composed of many smaller discussion boards, called subreddits. These subreddits range from friendly to very toxic, with different moderators in charge of each subreddit. Reddit as a larger platform decided to ban and remove some of its most toxic and hateful subreddits, including r/c***town (note: I censored out a racial slur for Black people), and r/fatpeoplehate. In a study of what happened after this ban:
Reddit, as a larger platform, decided to ban and remove some of its most toxic and hateful subreddits, including r/c***town (note: censored a racial slur for Black people) and r/fatpeoplehate. A study of the effects of this ban found that some users ceased participating altogether, while others changed their behavior to become less extreme. This suggests that removing harmful content from the platform can effectively reduce certain types of toxic behavior and promote a more positive community atmosphere.
-
We want to provide you, the reader, a chance to explore mental health more. We want you to be considering potential benefits and harms to the mental health of different people (benefits like reducing stress, feeling part of a community, finding purpose, etc., and harms like unnecessary anxiety or depression, opportunities and encouragement of self-bullying, etc.). As you do this you might consider personality differences (such as introverts and extroverts), and neurodiversity, the ways people’s brains work and process information differently (e.g., ADHD, Autism, Dyslexia, face blindness, depression, anxiety). But be careful generalizing about different neurotypes (such as Autism), especially if you don’t know them well. Instead try to focus on specific traits (that may or may not be part of a specific group) and the impacts on them (e.g., someone easily distracted by motion might…, or someone sensitive to loud sounds might…, or someone already feeling anxious might…). We will be doing a modified…
Social media can have both positive and negative effects on mental health, with outcomes often influenced by individual traits like personality and sensitivity to certain stimuli. For some, it can reduce stress, foster a sense of community, and provide a space to find purpose, while for others, it may heighten feelings of anxiety, self-comparison, or even self-bullying. It’s crucial to remember that everyone’s experience is unique, so understanding these impacts requires careful thought about specific needs and sensitivities rather than broad generalizations.
-
In 2019 the company Facebook (now called Meta) presented an internal study that found that Instagram was bad for the mental health of teenage girls, and yet they still allowed teenage girls to use Instagram. So, what does social media do to the mental health of teenage girls, and to all its other users? The answer is of course complicated and varies. Some have argued that Facebook’s own data is not as conclusive as you think about teens and mental health.
The impact of social media on teenage girls' mental health is complex and layered. While Facebook's internal study suggested Instagram might negatively affect some teenage girls, especially around issues like self-esteem and body image, the data isn't entirely clear-cut or universally agreed upon. Different studies show varied effects, with some suggesting that social media can foster connections and support, while others highlight risks like comparison and anxiety, showing that the relationship between social media and mental health isn’t one-size-fits-all.
-
The online community activity of copying and remixing can be a means of cultural appropriation, which is when one cultural group adopts something from another culture in an unfair or disrespectful way (as opposed to a fair, respectful cultural exchange). For example, many phrases from Black American culture have been appropriated by white Americans and had their meanings changed or altered (like “woke”, “cancel”, “shade”, “sip/spill the tea”, etc.). Additionally, white Americans often use images and gifs of Black people reacting and expressing emotions. This modern practice with gifs has been compared to the earlier (and racist) art forms of blackface, where white actors would paint their faces black and then act in exaggerated unintelligent ways.
Copying and remixing online can lead to cultural appropriation when one culture adopts elements of another disrespectfully. For instance, Black American phrases are often appropriated and lose their original meanings. Similarly, non-Black people using Black reaction gifs has been criticized as a digital form of blackface, reflecting harmful stereotypes.
-
In the 1976 book The Selfish Gene, evolutionary biologist Richard Dawkins said that rather than looking at the evolution of organisms, it made even more sense to look at the evolution of the genes of those organisms (sections of DNA that perform some functions and are inherited). For example, if a bee protects its nest by stinging an attacking animal and dying, then it can’t reproduce, and it might look like a failure of evolution. But if the gene that told the bee to die protecting the nest was shared by the other bees in the nest, then that one bee dying allows the gene to keep being replicated, so the gene is successful evolutionarily. Since genes contained information about how organisms would grow and live, biological evolution could be considered to be evolving information. Dawkins then took this idea of the evolution of information and applied it to culture, coining the term “meme” (intended to sound like “gene”).
Dawkins suggests that evolution should focus on gene survival rather than individual survival. He proposes that genes evolve as information, explaining the logic behind altruistic behaviors. The concept of “meme” in cultural evolution shows that ideas can spread and evolve like genes.
-
- Oct 2024
-
Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content. This is particularly important for people who make their money from social media content. For example, in the case of the simple “show latest posts” algorithm, the best way to get your content seen is to constantly post and repost your content (though if you annoy users too much, it might backfire).
This paragraph effectively highlights how users attempt to leverage recommendation algorithms to amplify their content, which is especially important for individuals who make a living from social media. The example of the "show latest posts" algorithm illustrates that frequently posting and reposting content can be a simple way to increase visibility, though it also carries the risk of annoying followers. This strategy reflects the tactical thinking of social media users in trying to balance maximizing content reach while avoiding negative backlash from overexposure.
-
Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibilities that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine and powder cocaine in the 90s. The guidelines suggested harsher sentences for the version of cocaine more commonly used by Black people, and lighter sentences for the version of cocaine more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act.)
This passage highlights how systemic analysis can reveal biases and inequalities embedded in organizational rules, independent of individual intentions. The example of 1990s U.S. sentencing guidelines for crack vs. powder cocaine demonstrates how policies can produce racially biased outcomes, disproportionately impacting Black people. It underscores the importance of examining systems critically, as structural biases can lead to injustice even without individual prejudice.
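The systemic point can be made concrete in code: a rule that never mentions race can still produce skewed outcomes. The sketch below uses the pre-2010 federal trigger weights for a five-year mandatory minimum (5 g of crack vs. 500 g of powder cocaine, the 100:1 ratio the Fair Sentencing Act later reduced); treat it as an illustration of the passage’s logic, not as legal detail:

```python
def mandatory_minimum_triggered(drug_form, grams):
    """Does this quantity trigger the five-year mandatory minimum?

    The rule itself is facially neutral -- it mentions only drug form
    and weight -- yet because the two forms were used disproportionately
    by different racial groups, following it produced racially skewed
    outcomes regardless of any judge's intent.
    """
    thresholds = {"crack": 5, "powder": 500}  # grams (pre-2010 federal law)
    return grams >= thresholds[drug_form]

# Identical conduct, same drug, different form -> different outcome.
crack_result = mandatory_minimum_triggered("crack", 10)
powder_result = mandatory_minimum_triggered("powder", 10)
```

Nothing in the function is biased in the individual sense; the bias lives entirely in the thresholds, which is what makes it systemic.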
-
A disability is an ability that a person doesn’t have, but that their society expects them to have. For example:
- If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation.
- If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation.
- If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation.
- If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs, would have a disability in that situation.
Disabilities often arise from societal assumptions about what people are capable of, rather than from inherent limitations, highlighting the importance of inclusive design. Many environments and products are created with a “one-size-fits-all” approach, unintentionally excluding those who don't fit the standard expectations. By designing spaces and objects that accommodate diverse needs, society can reduce the limitations that people with disabilities face.
-
Some disabilities are visible disabilities that other people can notice by observing the disabled person (e.g., wearing glasses is an indication of a visual disability, or a missing limb might be noticeable). Other disabilities are invisible disabilities that other people cannot notice by observing the disabled person (e.g., chronic fatigue syndrome, contact lenses for a visual disability, or a prosthetic for a missing limb covered by clothing). Sometimes people with invisible disabilities get unfairly accused of “faking” or “making up” their disability (e.g., someone who can walk short distances but needs to use a wheelchair when going long distances).
Invisible disabilities are often misunderstood, leading people who have them to face accusations of "faking" or "pretending," which is unfair and hurtful. While some disabilities are visibly noticeable, others are hidden beneath the surface; society needs greater awareness and empathy to truly support these individuals. When assessing others' health or abilities, we should remember that appearances don't tell the whole story, and we should avoid making quick judgments.
-
While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is, or even tell if two people used the same password. Therefore, if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time). But companies don’t always follow that proper security practice. For example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly, and hackers then leaked their password database of 153 million users.
This passage emphasizes the expectations we have of social media platforms to keep our information secure and highlights notable failures in doing so. It underscores how improper security practices, such as storing passwords in plain text or using weak encryption, leave users vulnerable to data breaches. The examples of Facebook and Adobe demonstrate the serious consequences of these lapses, reminding us of the critical importance of robust security measures for protecting user data.
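The “special individual encryption process” the passage describes is salted one-way password hashing. Here is a minimal sketch using Python’s standard library; a real system should use a vetted password-hashing library (e.g., bcrypt or argon2) with carefully tuned parameters:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store a salted one-way hash of the password, never plain text."""
    salt = salt if salt is not None else os.urandom(16)  # per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """The database can only *confirm* a guess; it cannot recover the
    password, and identical passwords get different salts and digests."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
```

Because each password gets its own random salt, even two users with the same password end up with different stored digests, and an attacker with the database is forced into the slow brute-force guessing the passage describes.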
-
There are many reasons, both good and bad, that we might want to keep information private:
- There might be some things that we just feel aren’t for public sharing (like how most people wear clothes in public, hiding portions of their bodies).
- We might want to discuss something privately, avoiding embarrassment that might happen if it were shared publicly.
- We might want a conversation or action that happens in one context not to be shared in another (context collapse).
- We might want to avoid the consequences of something we’ve done (whether ethically good or bad), so we keep the action or our identity private.
- We might have done or said something we want to be forgotten, or at least made less prominent.
- We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private.
- We might want to avoid physical danger from a stalker, so we might keep our location private.
- We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad).
When we use social media platforms, though, we at least partially give up some of our privacy. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly.
This passage effectively highlights the diverse reasons why people value privacy, ranging from maintaining dignity to protecting themselves from harm. It also raises important concerns about the illusion of privacy on social media, where supposedly private communications are still accessible to companies. The contrast between the need for privacy and the reality of online platforms prompts a critical discussion on how much control we really have over our personal information.
-
Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended. See more in…
This passage illustrates how datasets can be unintentionally skewed, compromising the integrity of data collection. The viral spread of information on platforms like TikTok can lead to overrepresentation of a specific demographic, limiting the usefulness of the surveys. It highlights the challenges of maintaining diverse and balanced datasets, especially in an open, online environment.
-
Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’.
This passage highlights how social media platforms monetize by targeting advertisements to specific audiences, which can be beneficial when promoting niche products. However, it also raises concerns about the ethical implications of targeting vulnerable groups or manipulating users for political gain, as seen in the 2016 Trump campaign. This dual nature of targeted advertising calls for a balance between business interests and ethical responsibility.
-
Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls. Instead, Film Crit Hulk suggests giving power to the victims and using “skilled moderation and the willingness to kick people off platforms for violating rules about abuse.”
This passage argues that “don’t feed the trolls” puts the burden on victims, giving trolls too much power. Instead, Film Crit Hulk advocates for empowering victims through strong moderation and removing abusers from platforms. It highlights the importance of holding platforms accountable for creating safe online spaces.
-
-
social-media-ethics-automation.github.io
-
Trolling is when an Internet user posts inauthentic content (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction. When the goal is provoking an emotional reaction, it is often a negative emotion, such as anger or emotional pain. When the goal is disruption, it might be attempting to derail a conversation (e.g., concern trolling), to make a space no longer useful for its original purpose (e.g., joke product reviews), or to get people to take absurd fake stories seriously.
This passage defines trolling by emphasizing its two main goals: provoking negative emotions and causing disruption. Trolling often involves insincere behavior, like spreading false information or derailing conversations. It negatively affects both individual emotions and the quality of online spaces. Trolling also reflects the challenges of online anonymity and the openness of social media.
-
-
social-media-ethics-automation.github.io
-
This trend brought complicated issues of authenticity because presumably there was some human employee charged with running the company’s social media account. We are simultaneously aware that, on the one hand, the human employee may be expressing themselves authentically (whether playfully or about serious issues), but also that this human is at the mercy of the corporation, which can at any moment tell them to stop or replace them with someone else.
This paragraph effectively captures the tension between authenticity and corporate control in social media management. It highlights the duality where an employee may express genuine thoughts but remains bound to the corporation's directives, making their authenticity conditional and fragile.
-
-
social-media-ethics-automation.github.io
-
On social media, context collapse is a common concern, since on a social networking site you might be connected to very different people (family, different groups of friends, co-workers, etc.). Additionally, something that was shared within one context (like a private message), might get reposted in another context (publicly posted elsewhere).
The paragraph introduces an important and complex issue concisely. Adding a few more details about its effects on behavior, specific examples, and emotional impact could make it even more insightful.
-
-
social-media-ethics-automation.github.io
-
When computers store numbers, there are limits to how much space can be used to save each number. This limits how big (or small) the numbers can be, and causes rounding with floating-point numbers. Additionally, programming languages might include other ways of storing numbers, such as fractions, complex numbers, or limited number sets (like only positive integers).
When computers store numbers, they are limited by memory space, which may lead to problems such as floating-point precision errors and integer overflows. Different programming languages provide special data types such as fractions and complex numbers to solve these limitations.
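The limits described above are easy to see in a short Python session. This is a minimal sketch: Python's floats are 64-bit IEEE 754 values, so decimal values like 0.1 can only be stored approximately, while the standard-library `Fraction` and built-in `complex` types are examples of the alternative number representations the passage mentions.

```python
from fractions import Fraction

# Floats are stored in a fixed amount of space (64 bits), so many
# decimal values can only be stored approximately, causing rounding.
print(0.1 + 0.2)             # 0.30000000000000004, not exactly 0.3
print(0.1 + 0.2 == 0.3)      # False

# Fractions store numbers as exact ratios of integers instead,
# avoiding the rounding (at the cost of speed and memory).
exact = Fraction(1, 10) + Fraction(2, 10)
print(exact)                     # 3/10
print(exact == Fraction(3, 10))  # True

# Complex numbers are another built-in alternative representation.
print(complex(1, 2) + complex(3, 4))  # (4+6j)
```

The tradeoff is typical: the approximate float representation is fast and compact, while the exact `Fraction` representation grows as needed but computes more slowly.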
-
-
social-media-ethics-automation.github.io
-
The 1980s and 1990s also saw the emergence of more instant forms of communication with chat applications. Internet Relay Chat (IRC) let people create “rooms” for different topics, and people could join those rooms and participate in real-time text conversations with the others in the room.
IRC was influential in the early development of online communities, offering a decentralized, flexible, and open environment for communication, which contributed to its popularity in the 1990s. Although it's less popular now, IRC still has a dedicated user base, and its influence can be seen in modern chat tools like Slack, Discord, and others.
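The room model IRC introduced is visible in its wire protocol, which is plain text. The sketch below (not a full client, and with a made-up nickname and channel) just builds the raw command strings a program would send to register, join a room, and speak in it, following the shapes defined in the IRC specification:

```python
# Minimal sketch of raw IRC protocol commands (no network connection).
# The nickname "alice" and channel "#ethics" are made-up examples.

def irc_login(nick: str, real_name: str) -> list:
    """Commands to register a connection with an IRC server."""
    return [f"NICK {nick}", f"USER {nick} 0 * :{real_name}"]

def irc_join(channel: str) -> str:
    """Join a room (channel names are prefixed with '#')."""
    return f"JOIN {channel}"

def irc_say(channel: str, message: str) -> str:
    """Send a message that everyone in the room sees in real time."""
    return f"PRIVMSG {channel} :{message}"

print(irc_login("alice", "Alice Example"))
print(irc_join("#ethics"))
print(irc_say("#ethics", "hello, room!"))
```

Because the protocol is just lines of text, writing bots and bridges for IRC was easy, which is one reason its design echoes through later chat tools.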
-
-
social-media-ethics-automation.github.io
-
As you can see in the apple example, any time we turn something into data, we are making a simplification. If we are counting the number of something, like apples, we are deciding that each one is equivalent. If we are writing down what someone said, we are losing their tone of voice, accent, etc. If we are taking a photograph, it is only from one perspective, etc. Different simplifications are useful for different tasks. Any given simplification will be helpful for some tasks and be unhelpful for others. See also this saying in statistics: All models are wrong, but some are useful.
The article's apple example shows that simply counting apples ignores the variation among them, such as size, color, and quality. Similarly, transcribing a conversation records the words but loses emotional details like tone and intonation, and a photograph can depict a scene only from one perspective, never in its entirety. Every simplification has its limitations; how useful one is depends on how well it preserves the information relevant to a given task.
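The point about simplification can be made concrete with a toy sketch. The attributes below (variety, mass, bruising) are made-up examples of one possible way to turn apples into data; each representation answers some questions while discarding the information needed for others.

```python
# Two simplifications of the same basket of apples (made-up data).
apples = [
    {"variety": "Fuji", "mass_g": 182, "bruised": False},
    {"variety": "Gala", "mass_g": 140, "bruised": True},
    {"variety": "Fuji", "mass_g": 171, "bruised": False},
]

# Simplification 1: just a count. Useful for inventory, but it treats
# every apple as equivalent, so variety and condition are lost.
count = len(apples)

# Simplification 2: keep a few chosen attributes. Useful for quality
# control, but it still omits color, smell, exact shape, and so on.
unbruised = [a for a in apples if not a["bruised"]]

print(count)           # 3
print(len(unbruised))  # 2
```

Neither representation is the "true" one; each is a model that is wrong in some respects and useful for a particular task.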
-