34 Matching Annotations
  1. Mar 2023
    1. As a member of society, we hope you are informed about the role social media plays in shaping society, such as how design decisions and bots can influence social movements (polarizing, spreading, or stifling them), and the different economic, social, and governmental pressures that social media platforms operate under. We hope you are then able to advocate for ways of improving how social media operates in society. That might be through voting, or pressuring government officials, or spreading ideas and information, or organizing coordinated actions or protests.

      Social media plays a significant role in shaping society by influencing public opinion, driving social movements, and facilitating communication between individuals and groups. The design decisions and algorithms used by social media platforms can have a significant impact on the content that users see and engage with, which can ultimately shape the way people think and act.

    1. As a social media user, we hope you are informed about things like: how social media works, how they influence your emotions and mental state, how your data gets used or abused, strategies in how people use social media, and how harassment and spam bots operate. We hope with this you can be a more informed user of social media, better able to participate, protect yourself, and make it a valuable experience for you and others you interact with. For example, you can hopefully recognize when someone is intentionally posting something bad or offensive (like the bad cooking videos we mentioned in the Virality chapter, or an intentionally offensive statement) in an attempt to get people to respond and spread their content. Then you can decide how you want to engage (if at all) given how they are trying to spread their content.

      In some cases, engaging with such content can unintentionally contribute to its spread and popularity, which can ultimately harm the overall social media experience. It is important to be mindful of the intention behind the content and to decide whether engaging with it is worth potentially amplifying the negative message.

    1. Colonialism in Tech# The tech industry is full of colonialist thinking and practices, some more subtle than others. To begin with, much of the tech industry is centralized geographically, specifically in Silicon Valley, San Francisco, California. The leaders and decisions in how tech operates come out of this one wealthy location in a wealthy nation. Then, much of tech is dependent on exploiting cheap labor, often in dangerous conditions, in other countries (thus extracting the resource of cheap labor, from places with “inferior” governments and economies). This labor might be physical labor, or dealing with dangerous chemicals, or the content moderators who deal with viewing horrific online content. Tech industry leaders in Silicon Valley then take what they made with exploited labor, and sell it around the world, feeling good about themselves, believing they are benefitting the world with their “superior” products.

      The industry's reliance on this cheap labor often comes at the cost of the health and safety of workers, as well as the economic stability of the countries where they are located. Additionally, the tech industry's approach to global expansion can be seen as a form of cultural colonialism, with Silicon Valley leaders assuming that their products and values are universally superior and exporting them without much consideration for local contexts or perspectives.

    1. Privacy Concerns# Another source of responses to Meta (and similar social media sites), is concern around privacy (especially in relation to surveillance capitalism). The European Union passed the General Data Protection Regulation (GDPR) law, which forces companies to protect user information in certain ways, and give users a “right to be forgotten” online. Apple also is concerned about privacy, so it introduced app tracking transparency in 2021. In response, Facebook says Apple iOS privacy change will result in $10 billion revenue hit this year. Note that Apple can afford to be concerned with privacy like this because it does not make much money off of behavioral data. Instead, Apple’s profits are mostly from hardware (e.g., iPhone) and services (e.g., iCloud, Apple Music, Apple TV+).

      Privacy concerns have become a significant issue for social media platforms in recent years, and companies like Facebook have faced criticism for their data handling practices. The introduction of GDPR and other similar laws around the world has increased pressure on companies to be more transparent about their data collection and usage policies. Apple's introduction of app tracking transparency is another example of this trend. Many people are now choosing to use alternative social media platforms that prioritize privacy and security, such as Mastodon and other decentralized networks.

  2. Feb 2023
    1. 17.6. Stopping Harassment?# So how can platforms and individuals stop themselves from being harassed? Well, individuals can block or mute harassers, but the harassers may be a large group, or they might make new accounts. They might also try to use the legal system, but online harassment is often not taken seriously, and harassers often use tactics that avoid being illegal. The platform itself sometimes can be helpful. Reporting harassment might result in the user being banned, or the platform might decide to take out entire problematic sections, such as when Reddit banned its most toxic subreddits, and found it reduced toxic behavior on the site overall. There are also other tools to help individuals who are being harassed by a crowd. For example, the Twitter app “Block Party” supports mass blocking and other advanced features.

      Additionally, platforms can take proactive measures to prevent harassment in the first place. This includes implementing clear and comprehensive community guidelines, providing education and resources for users on how to identify and report harassment, and investing in moderation resources to quickly and effectively respond to reports of harassment.

    1. Moderation and Violence# You might remember from Chapter 14 that social contracts, whether literal or metaphorical, involve groups of people all accepting limits to their freedoms. Because of this, some philosophers say that a state or nation is, fundamentally, violent. Violence in this case refers to the way that individual Natural Rights and freedoms are violated by external social constraints. This kind of violence is considered to be legitimated by the agreement to the social contract. This might be easier to understand if you imagine a medical scenario. Say you have broken a bone and you are in pain. A doctor might say that the bone needs to be set; this will be painful, and kind of a forceful, “violent” action in which someone is interfering with your body in a painful way. So the doctor asks if you agree to let her set the bone. You agree, and so the doctor’s action is construed as being a legitimate interference with your body and your freedom. If someone random just walked up to you and started pulling at the injured limb, this unagreed violence would not be considered legitimate. Likewise, when medical practitioners interfere with a patient’s body in a way that is non-consensual or not what the patient agreed to, then the violence is considered illegitimate, or morally bad. We tend to think of violence as being another “normatively loaded” word, like authenticity. But where authenticity is usually loaded with a positive connotation–on the whole, people often value authenticity as a good thing–violence is loaded with a negative connotation. Yes, the doctor setting the bone is violent and invasive, but we don’t usually call this “violence” because it is considered to be a legitimate exercise of violence. Instead, we reserve the term “violence” mostly for describing forms of interference that we consider to be morally bad.

      When it comes to moderation, violence can be an issue because the act of moderating, or enforcing social contracts, can be seen as a form of interference or constraint on individual freedoms. This is why some people may view moderation as inherently violent, even if it is carried out in a peaceful manner.

    1. Intersectionality# As we look at the above examples we can see examples of intersectionality, which means that not only are people treated differently based on their identities (e.g., race, gender, class, disability, weight, height, etc.), but combinations of those identities can compound unfair treatment in complicated ways. For example, you can test a resume filter and find that it isn’t biased against black people, and it isn’t biased against women. But it might turn out that it is still biased against black women. This could happen because the filter “fixed” the gender and race bias by over-selecting white women and black men, while under-selecting black women. Key figures:

      Intersectionality refers to the complex and interconnected ways in which different forms of discrimination and oppression intersect and compound, creating unique experiences of disadvantage for individuals who belong to multiple marginalized groups. It recognizes that social identities such as race, gender, class, disability, sexual orientation, and others are not separate and distinct, but rather intersect and influence one another in complex ways.
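
      As a minimal sketch of how that can play out arithmetically (all counts below are invented purely for illustration), the Python snippet computes selection rates that look equal by race alone and by gender alone, even though Black women are never selected:

      ```python
      # Hypothetical resume-filter outcomes: (race, gender) -> (selected, total applicants).
      # The numbers are made up to illustrate the arithmetic, not drawn from real data.
      outcomes = {
          ("white", "man"):   (10, 30),
          ("white", "woman"): (10, 10),
          ("Black", "man"):   (10, 10),
          ("Black", "woman"): (0,  10),
      }

      def rate(groups):
          """Overall selection rate across the given (race, gender) groups."""
          selected = sum(outcomes[g][0] for g in groups)
          total = sum(outcomes[g][1] for g in groups)
          return selected / total

      print("white:", rate([("white", "man"), ("white", "woman")]))   # 0.5
      print("Black:", rate([("Black", "man"), ("Black", "woman")]))   # 0.5
      print("men:  ", rate([("white", "man"), ("Black", "man")]))     # 0.5
      print("women:", rate([("white", "woman"), ("Black", "woman")])) # 0.5
      # Yet the intersectional group is severely under-selected:
      print("Black women:", rate([("Black", "woman")]))               # 0.0
      ```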

    2. While anyone is vulnerable to harassment online (and offline as well), some people and groups are much more prone to harassment, particularly marginalized and oppressed people in a society. Historically, of course, different demographic groups have been subject to harassment or violence, such as women, LGBTQ+ people, and Black people (e.g., the FBI trying to convince Martin Luther King Jr. to commit suicide).

      Overall, anyone can be a target of online harassment, but marginalized and oppressed groups are at a higher risk due to systemic discrimination and bias in society. It is important to acknowledge and address this reality and work towards creating a safer and more inclusive online environment for all.

    1. 16.3.1. “Solving” a “Problem”# When social media users work together, we can consider what problem they are solving. For example, for some of the TikTok Duet videos from the virality chapter, the “problem” would be something like “how do we create music out of this source video” and the different musicians contribute their own piece to the solution. For some other examples: In the case of a missing hiker rescued after Twitter user tracks him down using his last-sent photo, the “problem” was “where did the hiker disappear?” and the crowd investigated whatever they could to find the solution of the hiker’s location. In the case of Canucks’ staffer uses social media to find fan who saved his life, the “problem” was “who is the fan who saved the Canucks’ staffer’s life?” and the solution was basically to try to identify and dox the fan (though hopefully in a positive way). In the case of Twitter tracks down mystery couple in viral proposal photos, the problem was “who is the couple in the photo?” and the solution was again to basically dox them, though in the article they seemed ok with it.

      It's important to note that in some of these cases, the solution involves doxxing, which is the practice of searching for and publishing private or identifying information about an individual without their consent. While doxxing can sometimes be used for positive purposes, such as locating missing persons, it can also be used for harassment or other malicious purposes. It's important to use caution and ethical consideration when using social media to solve problems.

    1. When tasks are done through large groups of people making relatively small contributions, this is called crowdsourcing. The people making the contributions generally come from a crowd of people that aren’t necessarily tied to the task (e.g., all internet users can edit Wikipedia), but then people from the crowd either get chosen to participate, or volunteer themselves. When a crowd is providing financial contributions, that is called crowdfunding (e.g., patreon, kickstarter, gofundme). Humans have always collaborated on tasks, and crowds have been enlisted in performing tasks long before the internet existed. What social media (and other internet systems) have done is expand the options for how people can collaborate on tasks.

      Crowdsourcing is often used as a way to harness the collective intelligence and creativity of a large group of people, and can be a powerful tool for companies, organizations, and individuals looking to tap into the wisdom of the crowd. Crowdsourcing can also be used for social and political activism, such as the crowdsourced mapping efforts that have been used in disaster response and humanitarian aid.

    1. Moderation Tools# We’ve looked at what type of content is moderated; now let’s look at how it is moderated. Sometimes individuals are given very little control over content moderation or defense from the platform, and then the only advice that is useful is: “don’t read the comments.” But some have argued that this shifts responsibility onto the individual users getting negative comments, when the responsibility should be on the people in charge of creating the platform. So let’s look at the type of content moderation controls that might be given to individuals, and might be used by platforms.

      Content moderation controls can vary depending on the platform and its policies. Some platforms may provide more robust moderation tools, while others may rely on community reporting and self-moderation.

    1. Governments might also have rules about content moderation and censorship, such as laws in the US against Child Sexual Abuse Material (CSAM). China additionally censors various news stories in their country, like stories about protests. In addition to banning news on their platforms, in late 2022 China took advantage of Elon Musk having fired almost all Twitter content moderators to hide news of protests by flooding Twitter with spam and porn.

      Government censorship refers to the act of a government controlling or limiting access to information or communication, typically through laws or regulations. While censorship may be justified in certain cases such as preventing the spread of illegal content, hate speech or false information, it can also be used to restrict freedom of expression and suppress dissenting voices.

    1. Munchausen by Internet# Munchausen Syndrome (or Factitious disorder imposed on self) is when someone pretends to have a disease, like cancer, to get sympathy or attention. People with various illnesses often find support online, and even form online communities. It is often easier to fake an illness for an online community than in an in-person community, so many have done so (like the fake professor @Sciencing_Bi in the authenticity chapter). People who fake these illnesses often do so as a result of their own mental illness, so, in fact, “they are sick, albeit in a very different way than claimed.”

      This can take many forms, from posting about fabricated medical problems on social media to joining an online support group and pretending to be a member with special circumstances. The motivations behind Munchausen by Internet vary, but they are often tied to mental health problems such as depression or anxiety. Some people may seek attention or recognition, while others may be trying to control their environment or escape from daily life. In some cases, the person may have a real physical condition but exaggerates or fabricates symptoms to attract attention.

    1. Now let’s look at some of the more healthy sides of social media use. First let’s consider that, while social media use is often talked about as “addiction” or “junk food,” there might be better ways to think about social media use, as a place that you might enjoy, connect with others, learn new things, and express yourself.

      Social media can be a way to connect with people who share similar experiences and struggles, and this can help reduce feelings of isolation and provide a sense of belonging. In fact, social media can provide a sense of social connectedness that some people may not have in their offline lives.

    1. Books# The book Writing on the Wall: Social Media - The First 2,000 Years describes how, before the printing press, when someone wanted a book, they had to find someone who had a copy and have a scribe make a copy. So books that were popular spread through people having scribes copy each other’s books. And with all this copying, there might be different versions of the book spreading around, because of scribal copying errors, added notes, or even the original author making an updated copy. So we can look at the evolution of these books: which got copied, and how they changed over time.

      Two thousand years ago, getting a book was difficult: it had to be copied by hand, and mistakes crept in with every copy. With the emergence of the printing press, and now social media platforms, those copying errors no longer happen, which shows how powerful online social media is for spreading content.

    1. When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, that is, an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users.

      Recommendation algorithms take many factors into account, such as people's preferences and interests, so users get more relevant suggestions and more value from social platforms. This kind of intelligent calculation also creates economic momentum, since better recommendations keep people engaged and drive advertising and commerce.
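
      As a minimal sketch of what “a series of steps or rules” could look like in practice (the post fields, weights, and scoring rule below are invented for illustration, not any real platform’s algorithm):

      ```python
      # Hypothetical post records; the field names are made up for this example.
      posts = [
          {"id": 1, "likes": 120, "comments": 30, "age_hours": 2,  "from_followed_account": True},
          {"id": 2, "likes": 900, "comments": 15, "age_hours": 48, "from_followed_account": False},
          {"id": 3, "likes": 10,  "comments": 2,  "age_hours": 1,  "from_followed_account": True},
      ]

      def score(post):
          """One possible rule: weight engagement, decay with age, boost followed accounts."""
          engagement = post["likes"] + 3 * post["comments"]  # comments weighted more heavily
          recency = 1 / (1 + post["age_hours"])              # newer posts score higher
          follow_boost = 1.5 if post["from_followed_account"] else 1.0
          return engagement * recency * follow_boost

      # The "recommended feed" is just the posts sorted by this score, highest first.
      feed = sorted(posts, key=score, reverse=True)
      print([p["id"] for p in feed])  # [1, 2, 3]
      ```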

    1. 1.2.1. Individual vs. Systemic Analysis# Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibility that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine vs. powder cocaine in the 90s. The guidelines suggested harsher sentences on the version of cocaine more commonly used by Black people, and lighter sentences on the version of cocaine more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act).

      The difference between individual and systemic analysis lies in the focus of the analysis. Individual analysis focuses on the actions and behavior of an individual person, while systemic analysis focuses on the broader systems and structures in which individuals operate.

    1. Come up with at least two different theoretical sets of rules (recommendation algorithms) for what would make a “good” social media post to recommend. Consider all the information you could get about a post, both from the social media API and from information the social media company has internally, for example: post engagement, information about the user, information about the topic (that we can try to guess), and other data mining strategies (like sentiment analysis). Compare your two different strategies, and think about how users might try to behave in order to game the algorithm.

      It's important for social media companies to be transparent about their recommendation algorithms and to continually monitor and adjust them to prevent manipulation and ensure that users receive the most valuable and relevant recommendations.
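
      As one hedged sketch of what two such rule sets might look like side by side (all field names and weights here are invented), along with how users might try to game each:

      ```python
      def strategy_engagement(post):
          """Strategy A: purely engagement-driven -- recommend whatever gets reactions."""
          return post["likes"] + post["shares"] + 2 * post["comments"]

      def strategy_trust(post):
          """Strategy B: favor positive-sentiment posts from followed accounts,
          and penalize posts that other users have reported."""
          base = 2.0 if post["from_followed_account"] else 1.0
          return base * post["sentiment"] - 5 * post["reports"]

      post = {"likes": 50, "shares": 5, "comments": 12, "from_followed_account": True,
              "sentiment": 0.8, "reports": 0}
      print(strategy_engagement(post), strategy_trust(post))  # 79 1.6

      # Gaming: Strategy A rewards rage-bait, since anything that provokes replies scores
      # well; Strategy B could be gamed by stuffing posts with upbeat language to inflate
      # the sentiment score, or by mass-reporting competing posts.
      ```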

    1. In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between the designers who make things and the disabled people those things are made for. And unfortunately, as researcher Dr. Cynthia Bennett points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.” You can see Dr. Bennett’s research talk on this in the following YouTube video:

      This can lead to designs that do not fully meet disabled people's needs and result in a lack of accessibility. Including disabled people in the design process, and taking their perspectives and needs seriously, is crucial to creating accessible, inclusive, and effective designs.

    1. Those with disabilities often find ways to cope with their disability, that is, find ways to work around difficulties they encounter and seek out places and strategies that work for them (whether realizing they have a disability or not). Additionally, people with disabilities might change their behavior (whether intentionally or not) to hide the fact that they have a disability, which is called masking. Masking may take a mental or physical toll on the person doing it, which others around them won’t realize.

      People with disabilities often find ways to cope and manage their disability in a world that may not always be accommodating to their needs. This can lead to the practice of "masking" where they change their behavior to hide their disability, which can be mentally or physically draining. The burden of managing their disability is placed solely on the person with the disability, rather than the environment being adapted to accommodate them.

  3. Jan 2023
    1. Metadata: Sometimes the metadata that comes with content might violate someone’s privacy. For example, in 2012, when former tech CEO John McAfee was a suspect in a murder in Belize, he hid out in secret. But when Vice magazine wrote an article about him, the photos in the story contained metadata with his exact location in Guatemala.

      It's important for individuals to be aware of this and to remove any metadata from their files before sharing them online or with others. Companies also have a responsibility to properly handle metadata and ensure that it doesn't compromise the privacy of their users.
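
      As a minimal sketch of how that kind of metadata can be inspected and stripped (this assumes the Pillow imaging library and a hypothetical file name):

      ```python
      from PIL import Image
      from PIL.ExifTags import TAGS

      # Hypothetical file; any JPEG with EXIF metadata would work.
      img = Image.open("photo.jpg")

      # Print the EXIF tags embedded in the file. The "GPSInfo" tag is the one
      # that can reveal exactly where a photo was taken.
      for tag_id, value in img.getexif().items():
          print(TAGS.get(tag_id, tag_id), value)

      # One way to share the photo without its metadata: copy only the pixel data
      # into a fresh image and save that instead.
      clean = Image.new(img.mode, img.size)
      clean.putdata(list(img.getdata()))
      clean.save("photo_without_metadata.jpg")
      ```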

    1. For example, the proper security practice for storing user passwords is to use a special individual encryption for each password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is. Therefore, if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time).

      This highlights the importance of being careful with personal information shared online, and of companies implementing proper security measures to protect it. Unfortunately, data breaches and security lapses are common, and it's crucial for users to be proactive in securing their information. This can include using strong, unique passwords and enabling two-factor authentication, as well as regularly monitoring for any suspicious activity on their accounts.
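
      A minimal sketch of that storage practice using Python’s standard library (a unique random salt per password plus a slow PBKDF2 hash; the iteration count is just for illustration):

      ```python
      import hashlib
      import hmac
      import os

      def hash_password(password, salt=None):
          """Store a salted, slow hash of the password instead of the password itself."""
          if salt is None:
              salt = os.urandom(16)  # a unique random salt for each password
          digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
          return salt, digest

      def check_password(guess, salt, stored_digest):
          """The database can confirm a guess, but can't look the password back up."""
          _, digest = hash_password(guess, salt)
          return hmac.compare_digest(digest, stored_digest)

      salt, digest = hash_password("correct horse battery staple")
      print(check_password("correct horse battery staple", salt, digest))  # True
      print(check_password("letmein", salt, digest))                       # False
      ```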

    1. We’ve been accessing Reddit through Python and the “PRAW” code library. The praw code library works by sending requests across the internet to Reddit, using what is called an “application programming interface” or API for short. APIs have a set of rules for what requests you can make, what happens when you make the request, and what information you can get back. If you are interested in learning more about what you can do with praw and what information you can get back, you can look at the official documentation for those. But be warned they are not organized in a friendly way for newcomers and take some getting used to to figure out what these documentation pages are talking about.

      APIs have a set of endpoints, which are the URLs that the API can be accessed from, and each endpoint has a set of methods that can be used to access and retrieve information. PRAW abstracts away the complexity of making these requests and handling the responses, making it easier to access and work with data from Reddit.
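
      As a small sketch of what a PRAW request looks like in practice (the credentials are placeholders you would get by registering an app with Reddit, and the subreddit is arbitrary):

      ```python
      import praw

      # Placeholder credentials; real ones come from https://www.reddit.com/prefs/apps
      reddit = praw.Reddit(
          client_id="YOUR_CLIENT_ID",
          client_secret="YOUR_CLIENT_SECRET",
          user_agent="example-script by u/your_username",
      )

      # Ask the Reddit API (via PRAW) for the 5 hottest posts in r/cats.
      for submission in reddit.subreddit("cats").hot(limit=5):
          print(submission.title, submission.score, submission.num_comments)
      ```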

    1. Social Media platforms use the data they collect on users and infer about users to increase their power and increase their profits. One of the main goals of social media sites is to increase the time users are spending on their social media site. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later.

      Social media platforms collect data on users to understand their interests and behaviors in order to tailor their content and advertising. This helps them increase user engagement and revenue from targeted advertising. Additionally, by understanding user behavior and preferences, social media platforms can also improve their platform's user experience, which can lead to increased user retention and engagement.

    1. One of the traditional pieces of advice to dealing with trolls is “Don’t feed the trolls,” which means that if you don’t respond to trolls, they will get bored and stop trolling. We can see this advice as well in the trolling community’s own “Rules of the Internet”:

      "Don't feed the trolls" is a common piece of advice for dealing with online trolls, which suggests that if you don't respond to their provocative or offensive behavior, they will eventually lose interest and move on. The idea is that trolls thrive on attention and engagement, and if they don't get the reaction they're looking for, they will eventually give up.

    1. Gatekeeping: Some trolling is done in a community to separate out an ingroup from outgroup (sometimes called newbies or normies). The ingroup knows that a post is just trolling, but the outgroup is not aware and will engage earnestly. This is sometimes known as trolling the newbies.

      Gatekeeping is a form of trolling in which an established community uses certain behaviors or language to separate themselves from newcomers or outsiders. This can include using inside jokes or references that only members of the group are aware of, or making it difficult for new members to participate in discussions or activities. The goal of this behavior is often to create a sense of exclusivity or elitism among members of the group, and to discourage or exclude outsiders.

    1. 6.4.1. Context Collapse# Since we have different personas and ways of behaving in different groups of people, what happens if different groups of people are observing you at the same time? For example, someone might not know how to behave if they were at a restaurant with their friends and they noticed that their parents were seated at the table next to them. This phenomenon is called “context collapse.” On social media, context collapse is a common concern, since on a social networking site you might be connected to very different people (family, different groups of friends, co-workers, etc.). Additionally, something that was shared within one context (like a private message), might get reposted in another context (publicly posted elsewhere).

      This can lead to confusion and discomfort as we try to navigate the different social norms and expectations of each group. On social media, context collapse is a common concern because we are often connected to a wide variety of people, including family, friends, co-workers, and others.

    1. Inauthentic behavior is when the reality doesn’t match what is being presented. There are many ways inauthenticity shows up on social media, such as: catfishing (creating a fake profile that doesn’t match the actual user, usually in an attempt to trick or scam someone); sockpuppets (related to burner accounts: fake profiles created in order to argue a position, sometimes intentionally argued poorly to make the position look bad); astroturfing (an artificially created crowd to make something look like it has popular support); parody accounts (accounts that intentionally mimic a person or position, but are intended to be understood as fake); and Schrodinger’s asshole (the guy who says awful shit, and decides if he was “only kidding” depending on your reaction).

      Inauthentic behavior on social media refers to actions that involve presenting false or misleading information in order to deceive or manipulate others. A parody account is a social media account that intentionally mimics a real person or organization, usually for comedic or satirical purposes. The account is intended to be understood as fake, and its purpose is to make fun of or mock the person or organization it is mimicking.

    1. 5.3. Web 2.0 Social Media# In the first decade of the 2000s the way websites worked on the Internet went through a transition to what is called “Web 2.0.” In Web 2.0 websites (and web applications), the communication platforms and personal profiles merged. Many websites now let you create a profile, form connections, and participate in discussions with other members of the site. Platforms for hosting content without having to create your own website (like Blogs) emerged. And all of these websites became much more interactive, with updates appearing on users’ screens without the user having to request them.

      This led to the development of interactive and dynamic websites that allowed users to create profiles, form connections, participate in discussions, and share content. Blogging platforms like WordPress and Blogger emerged, which made it easier for individuals to create and share content without the need for technical expertise. Furthermore, web technologies such as AJAX allowed for more seamless and dynamic interactions, such as real-time updates, without the need for page refreshes.

    1. Around the same time, phone texting capabilities (SMS) started becoming popular as another way to send messages to your friends, family, and acquaintances. Additionally, many news sites and fan pages started adding built-in comment sections on their articles and bulletin boards for community discussion.

      This was a significant development in the history of mobile communication and greatly increased the convenience of staying in touch with others. Additionally, the rise of the internet and the development of web technologies allowed for the creation of interactive websites with features such as comment sections and bulletin boards, which allowed for greater community engagement and discussion.

    1. We’ve now looked at how different ways of storing data and putting constraints on data can make social media systems work better for some people than others, and we’ve looked at how this data also informs decision making and who is taken into account in ethics analyses. Given all that can be at stake in making decisions in how data will be stored and constrained, choose one type of data a social media site might collect (e.g., name, age, location, gender, posts you liked, etc.), and then choose two different ethics frameworks and consider what each framework would mean for someone choosing how that data will be stored and constrained.

      One type of data that a social media site might collect is a user's browsing history. This information can include the websites they visit, the links they click on, and the search queries they make. Overall, different ethical frameworks can lead to different recommendations for how data should be stored and constrained, and it is important to consider the potential impacts on all stakeholders involved when making decisions about data collection and use.

    1. As you can see in the apple example, any time we turn something into data, we are making a simplification. If we are counting the number of something, like apples, we are deciding that each one is equivalent. If we are writing down what someone said, we are losing their tone of voice, accent, etc. If we are taking a photograph, it is only from one perspective, etc. Different simplifications are useful for different tasks. Any given simplification will be helpful for some task and be unhelpful for others. See also this saying in statistics: “All models are wrong, but some are useful.”

      Simplifying data is an important part of the data analysis process because it allows us to make sense of large and complex sets of information. However, it is important to be mindful of the limitations of the simplifications that are made and how they may impact the results of the analysis. The saying “all models are wrong, but some are useful” highlights the idea that no model can perfectly represent reality, but some models can still be useful for making predictions or understanding complex systems.
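
      A tiny illustration of that point (the representations below are invented): each way of turning the same apples into data keeps some details and throws others away, and which simplification is useful depends on the task.

      ```python
      # Simplification 1: a count. Every apple is treated as equivalent.
      apples_in_basket = 3

      # Simplification 2: a richer record per apple. It keeps variety and color,
      # but still drops weight, bruises, exact shade, taste, and so on.
      apples = [
          {"variety": "Fuji",         "color": "red"},
          {"variety": "Granny Smith", "color": "green"},
          {"variety": "Fuji",         "color": "red"},
      ]

      print(apples_in_basket)                          # fine for "how many do we have?"
      print(sum(a["color"] == "red" for a in apples))  # needs the richer representation
      ```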

    1. Some platforms are used for sharing text and pictures (e.g., Facebook, Twitter, LinkedIn, WeChat, Weibo, QQ), some for sharing video (e.g., YouTube, TikTok), some for sharing audio (e.g., Clubhouse), some for sharing fanfiction (e.g., Fanfiction.net, AO3), some for gathering and sharing knowledge (e.g., Wikipedia, Quora, StackOverflow), and some for sharing erotic content (e.g., OnlyFans).

      Social media can influence our world and us in both good and bad ways. The decisions made by the creators of social media platforms can affect how they are used and what people do, such as how algorithms can be used to curate content and shape user behavior. Social media bots can change the dynamics on the platform by creating automated accounts that can interact with users and spread misinformation. Social media data can be used to learn about people's interests, behaviors, and opinions. There are ethical trade-offs to consider in all of this.

    2. Platforms can also be tailored for specific groups of people, like a social media platform for low-income blind people in India.

      Social media bots can also be used to manipulate public opinion by creating false impressions of consensus or by amplifying certain messages.