Matching Annotations
  1. Mar 2024
    1. We hope that by the end of this book you know a lot of social media terminology (e.g., context collapse, parasocial relationships, the network effect, etc.), that you have a good overview of how social media works and is used, and what design decisions are made in how social media works, and the consequences of those decisions. We also hope you are able to recognize how trends on internet-based social media tie to the whole of human history of being social and can apply lessons from that history to our current situations.

      This book familiarizes me with rich social media terminology and helps me understand the complexity of how social media functions, as well as the design decisions and consequences that shape its operation.

    1. In the first chapter of our book we quoted actor Kumail Nanjiani on tech innovators’ lack of consideration of the ethical implications of their work. Of course, concerns about the implications of technological advancement are nothing new. In Plato’s Phaedrus (~370 BCE), Socrates tells (or makes up) a story from Egypt critical of the invention of writing: Now in those days the god Thamus was the king of the whole country of Egypt, […] [then] came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; […] [W]hen they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: […] this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. In England in the early 1800s, Luddites were upset that textile factories were using machines to replace them, leaving them unemployed, so they sabotaged the machines. The English government sent soldiers to stop them, killing and executing many. (See also Sci-Fi author Ted Chiang on Luddites and AI.) Fig. 21.1 The start of an xkcd comic compiling a hundred years of complaints about how technology has sped up the pace of life (full transcript of comic available at explainxkcd). Inventors ignoring the ethical consequences of their creations is nothing new as well, and gets critiqued regularly: Fig. 21.2 A major theme of the movie Jurassic Park (1993) is scientists not thinking through the implications of their creations. Fig. 21.3 Tweet parodying how tech innovators often do blatantly unethical things. Many people like to believe (or at least convince others) that they are doing something to make the world a better place, as in this parody clip from the Silicon Valley show (the one Kumail Nanjiani was on, though not in this clip). But even people who thought they were doing something good regretted the consequences of their creations, such as Eli Whitney, who hoped his invention of the cotton gin would reduce slavery in the United States but only made it worse; Alfred Nobel, who invented dynamite (which could be used in construction or in war) and decided to create the Nobel Prizes; Albert Einstein, who regretted his role in convincing the US government to develop nuclear weapons; and Aza Raskin, who regretted his invention of infinite scroll.

      This series of examples reveals how ethical considerations are repeatedly overlooked in technological innovation, a pattern that recurs throughout history. It is worth considering that innovators often fail to anticipate the moral challenges their work may bring.

    1. Meta’s way of making profits fits in a category called Surveillance Capitalism. Surveillance capitalism began when internet companies started tracking user behavior data to make their sites more personally tailored to users. These companies realized that this data was something that they could profit from, so they began to collect more data than strictly necessary (“behavioral surplus”) and see what more they could predict about users. Companies could then sell this data about users directly, or (more commonly), they could keep their data hidden, but use it to sell targeted advertisements. So, for example, Meta might let an advertiser say they want an ad to only go to people likely to be pregnant. Or they might let advertisers make ads go only to “Jew Haters” (which is ethically very bad, and something Meta allowed).

      Surveillance capitalism shows how data collected to personalize a site became a product in itself: platforms gather more than they need ("behavioral surplus") and monetize the predictions through targeted advertising. The "Jew Haters" ad-targeting example makes clear how little ethical oversight constrained this system.

    1. I used to think that if we just gave people a voice and helped them connect, that would make the world better by itself. In many ways it has. But our society is still divided. Now I believe we have a responsibility to do even more. It’s not enough to simply connect the world, we must also work to bring the world closer together. Mark Zuckerberg, March 15, 2021 Meta now has a mission statement of “give people the power to build community and bring the world closer together.” But is this any better?

      Zuckerberg acknowledges the limits of connectivity. Meta's new mission emphasizes community and global unity, but its effectiveness hinges on actionable steps.

    1. Professor Kate Starbird regularly called for Twitter to introduce a retract button. This would help with misinformation, as a user who realized they posted false information could leave a tweet up, but put a retraction over it. It also would solve a dilemma where people who tweeted something they regretted felt caught between the choice of deleting a tweet (making it look like they were hiding their history), or leaving it up (looking like they stood by their bad tweet). Therefore a retraction feature could be used by someone who was being publicly shamed as a means of apologizing. So now, it’s your turn to think about how you would want a retraction feature to work on a social media site like Twitter: How would a user do the retraction? What options would they have (e.g., can they choose to keep or delete the original tweet content)? What additional information would they be able to provide? How would that retracted tweet look when viewed? How would that retracted tweet look when it is part of a retweet or quote tweet? Would there be any notifications sent when a tweet is retracted? Outline 3 different examples of how and when a user might retract a tweet (e.g., misinformation, regret a bad idea, regret mean tone, etc.).

      Introducing a retraction feature is an innovative idea that empowers users to correct mistakes or apologize. It fosters a more transparent and accountable social media environment, allowing users to address inappropriate content without resorting to deletion or preserving contentious tweets.

    1. We previously looked at how shame might play out in childhood development; now let’s look at different views of people being shamed in public. 18.3.1. Weak against strong: Jennifer Jacquet argues that shame can be morally good as a tool the weak can use against the strong: The real power of shame is it can scale. It can work against entire countries and can be used by the weak against the strong. Guilt, on the other hand, because it operates entirely within individual psychology, doesn’t scale. […] We still care about individual rights and protection. Transgressions that have a clear impact on broader society – like environmental pollution – and transgressions for which there is no obvious formal route to punishment are, for instance, more amenable to its use. It should be reserved for bad behaviour that affects most or all of us. […] A good rule of thumb is to go after groups, but I don’t exempt individuals, especially not if they are politically powerful or sizeably impact society. But we must ask ourselves about the way those individuals are shamed and whether the punishment is proportional. Jennifer Jacquet: ‘The power of shame is that it can be used by the weak against the strong’

      This perspective sparks profound reflection on shame as a tool for the weak against the strong. Shame seems to possess scalability when addressing transgressions with broad societal impacts, yet the crucial aspect lies in ensuring proportionality in punishments for both individuals and groups. Simultaneously, the satirical article humorously highlights society's tendency to treat celebrities' misfortunes as entertainment, providing a cynical view on public shaming.

  2. Feb 2024
    1. Gamergate was a harassment campaign in 2014-2015 that targeted non-men in gaming: Zoë Quinn, Brianna Wu, and Anita Sarkeesian. The harassment was justified by various false claims (e.g., journalistic malpractice), but was mostly motivated by either outright misogyny or by feeling threatened by critiques of games/gaming culture from a not straight-white-male viewpoint. The video below talks about how two factions within Gamergate fed off each other (you can watch the whole Gamergate series here)

      The "Gamergate" incident in the gaming industry has sparked a discussion about diversity and inclusion, highlighting some of the challenges in gaming. We should strive to create a more inclusive and respectful gaming community.

    2. Billionaires: One phrase that became popular on Twitter in 2022, especially as Elon Musk was in the process of buying Twitter, was: “It is always morally correct to bully billionaires.” (Note: We could not find the exact origins of this phrase or its variations). This is related to the concept in comedy of “punching up,” that is, making fun of people in positions of relatively more power.

      The phrase may reflect a view on the concentration of power and the distribution of wealth, and is related to the comedic concept of "punching up."

    1. Crowdsourcing isn’t always pre-planned or designed for. Sometimes a crowd stumbles into crowd tasks in an unplanned, ad hoc manner, like identifying someone and sharing the news in this scene from the movie Crazy Rich Asians. 16.3.1. “Solving” a “Problem”: When social media users work together, we can consider what problem they are solving. For example, for some of the TikTok Duet videos from the virality chapter, the “problem” would be something like “how do we create music out of this source video” and the different musicians contribute their own piece to the solution. For some other examples: In the case of a missing hiker rescued after Twitter user tracks him down using his last-sent photo, the “problem” was “Where did the hiker disappear?” and the crowd investigated whatever they could to find the solution of the hiker’s location. In the case of Canucks’ staffer uses social media to find fan who saved his life, the “problem” was “Who is the fan who saved the Canucks’ staffer’s life?” and the solution was basically to try to identify and dox the fan (though hopefully in a positive way). In the case of Twitter tracks down mystery couple in viral proposal photos, the problem was “Who is the couple in the photo?” and the solution was again to basically dox them, though in the article they seemed ok with it.

      It's fascinating how crowdsourcing can happen in an ad hoc way, with a crowd stumbling into a shared "problem" like locating a missing hiker or identifying a stranger in a photo. These examples also show how thin the line is between helpful collective problem-solving and doxing, even when the crowd means well.

    2. 16.3.2. Well-Intentioned Harm: Sometimes even well-intentioned efforts can do significant harm. For example, in the immediate aftermath of the 2013 Boston Marathon bombing, the FBI released a security photo of one of the bombers and asked for tips. A group of Reddit users decided to try to identify the bomber(s) themselves. They quickly settled on a missing man (Sunil Tripathi) as the culprit (it turned out he had died by suicide and was in no way related to the case), and flooded the Facebook page set up to search for Sunil Tripathi, causing his family unnecessary pain and difficulty. The person who set up the “Find Boston Bomber” Reddit board said “It Was a Disaster” but “Incredible”, and Reddit apologized for online Boston ‘witch hunt’.

      When people team up on social media to tackle challenges, it's cool to see the group problem-solving. Yet, the mix-up in the Boston bombing aftermath reminds us that good intentions can go wrong, teaching us to be careful with online investigations.

    1. When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing. For example, Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors, and on StackOverflow “A 2013 study has found that 75% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.” We see the same phenomenon on Twitter: Fig. 16.3 Summary of Twitter use by Pew Research Center. This small percentage of people doing most of the work in some areas is not a new phenomenon. In many aspects of our lives, some tasks have been done by a small group of people with specialization or resources. Their work is then shared with others. This goes back many thousands of years with activities such as collecting obsidian and making jewelry, to more modern activities like writing books, building cars, reporting on news, and making movies.

      It's interesting how in crowdsourcing and social media, a small group of active contributors often carries the load, while a larger majority tends to lurk or contribute minimally. This dynamic, seen throughout history in various tasks, raises questions about the distribution of effort and specialization in collaborative platforms.
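
      To make that concentration concrete, here is a minimal sketch in Python, using invented contribution counts, of how one might measure what share of contributions the most active users account for:

      ```python
      from collections import Counter

      # Invented contribution counts: many one-time contributors, a few power users.
      contributions = Counter({f"user{i}": 1 for i in range(90)})
      contributions.update({"power1": 400, "power2": 250, "power3": 150})

      def top_share(counts: Counter, fraction: float) -> float:
          """Fraction of all contributions made by the top `fraction` of users."""
          totals = sorted(counts.values(), reverse=True)
          k = max(1, int(len(totals) * fraction))
          return sum(totals[:k]) / sum(totals)

      print(f"top 1% of users -> {top_share(contributions, 0.01):.0%} of contributions")
      print(f"top 10% of users -> {top_share(contributions, 0.10):.0%} of contributions")
      ```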

    1. Rawls proposed a famous thought experiment. Imagine we were going to redesign America. A huge lottery was done to gather people from all walks of life into a committee to decide how the society should be structured and how it should function. Naturally, they will all have their own interests in mind, so Rawls proposed that they all be hidden behind a “veil of ignorance”, making it so that while they are on the committee, the people have no idea who they are, or what sort of life they will have once the new design is implemented. (The veil of ignorance is not a real thing, and it is extremely unclear how such an obscuring could be accomplished, although science fiction writers have had fun trying to imagine it.) Rawls’s thought was that if you don’t know whether you will be in one of society’s more powerful roles or more disadvantaged roles, then you will have the motivation to make sure you will be okay, whatever role you get in the end. Therefore, the committee members would design a just and fair society, so that they would be okay no matter where they end up. The design the committee agrees to forms the basis of a new “social contract”, or agreement about how society works. Theoretically, a social contract would guide us in how to live safely and fairly with each other, although injustice in the social contract means that these benefits are not always achieved. By “agreeing” to a social contract, we agree to let that contract moderate our natural rights as individual moral and rational agents. Natural Rights theory says no one should restrict my freedoms. Social Contract theory says that we use our freedom to accept certain restrictions, in order to make life better for all of us.

      Rawls' thought experiment with the "veil of ignorance" cleverly challenges individuals to design a just society without knowing their future roles. The idea of a social contract, despite its theoretical nature, highlights the trade-off between individual freedoms and collective well-being.

    2. One concept that comes up in a lot of different ethical frameworks is moderation. Famously, Confucian thinkers prized moderation as a sound principle for living, or as a virtue, and taught the value of the ‘golden mean’, or finding a balanced, moderate state between extremes. This golden mean idea got picked up by Aristotle—we might even say ripped off by Aristotle—as he framed each virtue as a medial state between two extremes. You could be cowardly at one extreme, or brash and reckless at the other; in the golden middle is courage. You could be miserly and penny-pinching, or you could be a reckless spender, but the aim is to find a healthy balance between those two. Moderation, or being moderate, is something that is valued in many ethical frameworks, not because it comes naturally to us, per se, but because it is an important part of how we form groups and come to trust each other for our shared survival and flourishing. Moderation also comes up in deontological theories, including the political philosophy tradition that grew out of Kantian rationalism: the tradition that is often identified with John Rawls, although there are many other variations out there too. In brief, here is the journey of the idea: Kant was influenced by ideas that were trending in his time–the European era we call the “Enlightenment”, which became very interested in the idea of rationality. We could write books about what they meant by the idea of “rationality”, and Kant certainly did so, but you probably already have a decent idea of what rationality is about. Rationalism tries to use reasoning, logical argument, and scientific evidence to figure out what to make of the world. Kant took this idea and ran with it, exploring the question of what if everything, even morality, could be derived from looking at rationality in the abstract. Many philosophers and, let’s face it, many sensible people since Kant have questioned whether his project could succeed, or whether his question was even a good question to be asking. Can one person really get that kind of “god’s-eye view” of ultimate rationality? People disagree a lot about what would be the most rational way to live. Some philosophers even suggested that it is hard to think about what is rational or reasonable without our take being skewed by our own aims and egos. We instinctively take whatever suits our own goals and frame it in the shape of reasons. Those who do not want their wealth taxed have reasons in the shape of rational arguments for why they should not be taxed. Those who do believe wealth should be taxed have reasons in the shape of rational arguments for why taxes should be imposed. Our motivations can massively affect which of those rationales we find to be most rational. This is what John Rawls wanted to address.

      This concept of moderation in ethics, from Confucius to Kant and Rawls, emphasizes finding a balanced middle ground for our collective well-being. Kant's pursuit of morality from abstract rationality faced skepticism about achieving a universal perspective, acknowledging biases tied to personal motives. Rawls sought to tackle these issues in political philosophy.

    1. In 2019, the company Facebook (now called Meta) presented an internal study that found that Instagram was bad for the mental health of teenage girls, and yet they still allowed teenage girls to use Instagram. So, what does social media do to the mental health of teenage girls, and to all its other users? The answer is of course complicated and varies. Some have argued that Facebook’s own data is not as conclusive as you think about teens and mental health. Many have anecdotal experiences with their own mental health and those they talk to. For example, cosmetic surgeons have seen how photo manipulation on social media has influenced people’s views of their appearance: People historically came to cosmetic surgeons with photos of celebrities whose features they hoped to emulate. Now, they’re coming with edited selfies. They want to bring to life the version of themselves that they curate through apps like FaceTune and Snapchat. Selfies, Filters, and Snapchat Dysmorphia: How Photo-Editing Harms Body Image Comedian and director Bo Burnham has his own observations about how social media is influencing mental health: “If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through. But in regards to social anxiety, social anxiety - there’s a part of social anxiety I think that feels like you’re a little bit disassociated from yourself. And it’s sort of like you’re in a situation, but you’re also floating above yourself, watching yourself in that situation, judging it. And social media literally is that. You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it, watch people watch them, watch people watch them watch them. My sort of impulse is like when the 13 year olds of today grow up to be social scientists, I’ll be very curious to hear what they have to say about it. But until then, it just feels like we just need to gather the data.” Director Bo Burnham On Growing Up With Anxiety — And An Audience - NPR Fresh Air (10:15-11:20) It can be difficult to measure the effects of social media on mental health since there are so many types of social media, and it permeates our cultures even of people who don’t use it directly. Some researchers have found that people using social media may enter a dissociation state, where they lose track of time (like what happens when someone is reading a good book). Researchers at Facebook decided to try to measure how their recommendation algorithm was influencing people’s mental health. So they changed their recommendation algorithm to show some people more negative posts and some people more positive posts. They found that people who were given more negative posts tended to post more negatively themselves. Now, this experiment was done without informing users that they were part of an experiment, and when people found out that they might be part of a secret mood manipulation experiment, they were upset. 13.1.1. Digital Detox?# Some people view internet-based social media (and other online activities) as inherently toxic and therefore encourage a digital detox, where people take some form of a break from social media platforms and digital devices. 
While taking a break from parts or all of social media can be good for someone’s mental health (e.g., doomscrolling is making them feel more anxious, or they are currently getting harassed online), viewing internet-based social media as inherently toxic and trying to return to an idyllic time from before the Internet is not a realistic or honest view of the matter. In her essay “The Great Offline,” Lauren Collee argues that this is just a repeat of earlier views of city living and the “wilderness.” As white Americans were colonizing the American continent, they began idealizing “wilderness” as being uninhabited land (ignoring the Indigenous people who already lived there, or kicking them out or killing them). In the 19th century, as wilderness tourism was taking off as an industry, natural landscapes were figured as an antidote to the social pressures of urban living, offering truth in place of artifice, interiority in place of exteriority, solitude in place of small talk. Similarly, advocates for digital detox build an idealized “offline” separate from the complications of modern life: Sherry Turkle, author of Alone Together, characterizes the offline world as a physical place, a kind of Edenic paradise. “Not too long ago,” she writes, “people walked with their heads up, looking at the water, the sky, the sand” — now, “they often walk with their heads down, typing.” […] Gone are the happy days when families would gather around a weekly televised program like our ancestors around the campfire! But Lauren Collee argues that by placing the blame on the use of technology itself and making not using technology (a digital detox) the solution, we lose our ability to deal with the nuances of how we use technology and how it is designed: I’m no stranger to apps that help me curb my screen time, and I’ll admit I’ve often felt better for using them. But on a more communal level, I suspect that cultures of digital detox — in suggesting that the online world is inherently corrupting and cannot be improved — discourage us from seeking alternative models for what the internet could look like. I don’t want to be trapped in cycles of connection and disconnection, deleting my social media profiles for weeks at a time, feeling calmer but isolated, re-downloading them, feeling worse but connected again. For as long as we keep dumping our hopes into the conceptual pit of “the offline world,” those hopes will cease to exist as forces that might generate change in the worlds we actually live in together. So in this chapter, we will not consider internet-based social media as inherently toxic or beneficial for mental health. We will be looking for more nuance and where things go well, where they do not, and why.

      This passage discusses how social media, and Instagram in particular, might affect mental health, especially for teenage girls. It mentions differing opinions, a study by Meta, anecdotes from cosmetic surgeons, and observations from comedian Bo Burnham. The passage acknowledges the complexity of measuring social media's impact and mentions the idea of a "digital detox." It doesn't label social media as entirely good or bad, aiming for a more nuanced view. I find it interesting, but the whole social media and mental health issue is really complicated.

    1. Since social media platforms can gather so much data on their users, they can try to use data mining to figure out information about their users’ moods, mental health problems, or neurotypes (e.g., ADHD, Autism). For example, Facebook has a suicide detection algorithm, where they try to intervene if they think a user is suicidal (Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts). As social media companies have tried to detect talk of suicide and sometimes remove content that mentions it, users have found ways of getting around this by inventing new word uses, like “unalive.” Larger efforts at trying to determine emotions or mental health through things like social media use, or iPhone or iWatch use, have had very questionable results, and any claims of being able to detect emotions reliably are probably false. Additionally, these attempts at detecting mental health can be part of violating privacy or can be used for unethical surveillance. For example: your employer might detect that you are unhappy and consider firing you, since they think you might not be fully committed to the job; or someone might build a system that tries to detect who is Autistic, and then force them into an abusive therapy system to try and “cure” them of their Autism (see also this more scientific explanation of that linked article).

      This paragraph delves into how social media platforms utilize data mining to comprehend users' emotions and mental health. It references Facebook's suicide detection algorithm and the hurdles in precisely assessing emotions. It underscores concerns about privacy infringement and potential employer surveillance, affecting job security.

    1. It isn’t clear what should be considered as “nature” in a social media environment (human nature? the nature of the design of the social media platform? are bots unnatural?), so we’ll just instead talk about selection. When content (and modified copies of content) is in a position to be replicated, there are factors that determine whether it gets selected for replication or not. As humans look at the content they see on social media, they decide whether they want to replicate it for some reason, such as: “that’s funny, so I’ll retweet it”; “that’s horrible, so I’ll respond with an angry face emoji”; “reposting this will make me look smart”; “I am inspired to use part of this to make a different thing.” Groups and organizations make their own decisions on what social media content to replicate as well (e.g., a news organization might find a social media post newsworthy, so they write articles about it). Additionally, content may be replicated because of: paid promotion and ads, where someone pays money to have their content replicated; and astroturfing, where crowds, often of bots, are paid to replicate social media content (e.g., like, retweet). Finally, social media platforms use algorithms and design layouts which determine what posts people see. There are various rules and designs social media sites can use, and they can amplify natural selection and unnatural selection in various ways. They can do this through recommendation algorithms as we saw last chapter, as well as by choosing what actions are allowed, what amount of friction is given to those actions, and what data is collected and displayed.

      This analysis of content selection mechanisms is thorough, emphasizing the influence of various human motivations as well as external factors such as organizations, paid advertising, and astroturfing. The metaphor comparing social media platform design to a natural environment provides an interesting perspective on the complexity of content dissemination.

    2. For social media content, replication means that the content (or a copy or modified version) gets seen by more people. Additionally, when a modified version gets distributed, future replications of that version will include the modification (a.k.a., inheritance). There are ways of duplicating that are built into social media platforms: actions such as liking, reposting, replying, and paid promotion get the original posting to show up for users more; actions like quote tweeting, or the TikTok Duet feature, let people see the original content, but modified with new context; and social media sites also provide ways of embedding posts in other places, like in news articles. There are also ways of replicating social media content that aren’t directly built into the social media platform, such as copying images or text and reposting them yourself, or taking screenshots and cross-posting to different sites.

      This summary is comprehensive, explaining replication methods both inside and outside the platform, which is crucial for expanding the reach and influence of content. The different reposting, quoting, and embedding mechanisms let individuals and organizations optimize their content strategies and achieve wider dissemination.
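
      As a rough illustration of replication with inheritance, here is a toy Python sketch (the Post class and its fields are invented for illustration): each copy starts from the version it replicates, so earlier modifications carry forward automatically.

      ```python
      from dataclasses import dataclass, field

      @dataclass
      class Post:
          """A toy model of content that can be replicated with modifications."""
          text: str
          modifications: list[str] = field(default_factory=list)

          def replicate(self, modification: str | None = None) -> "Post":
              # The copy starts from this version, so any earlier modification
              # is carried forward (inheritance); a new change is layered on top.
              copy = Post(self.text, list(self.modifications))
              if modification:
                  copy.modifications.append(modification)
              return copy

      original = Post("dance clip")
      duet = original.replicate("duet: added a bass line")
      duet_of_duet = duet.replicate("duet: added vocals")
      print(duet_of_duet.modifications)  # both earlier modifications are inherited
      ```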

    1. Some recommendation algorithms can be simple, such as reverse chronological order, meaning it shows users the latest posts (like how blogs work, or Twitter’s “See latest tweets” option). They can also be very complicated, taking into account many factors, such as: time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today); whether the post was made or liked by my friends or people I’m following; how much this post has been liked, interacted with, or hovered over; which other posts I’ve been liking, interacting with, or hovering over; what people connected to me or similar to me have been liking, interacting with, or hovering over; and what people near you have been liking, interacting with, or hovering over (they can find your approximate location, like your city, from your internet IP address, and they may know even more precisely). This perhaps explains why sometimes when you talk about something out loud it gets recommended to you (because someone around you then searched for it). Or maybe they are actually recording what you are saying and recommending based on that. Phone numbers or email addresses (sometimes collected deceptively) can be used to suggest friends or contacts. And probably many more factors as well! Now, how these algorithms precisely work is hard to know, because social media sites keep these algorithms secret, probably for multiple reasons: they don’t want another social media site copying their hard work in coming up with an algorithm; they don’t want users to see the algorithm and then be able to complain about specific details; and they don’t want malicious users to see the algorithm and figure out how to best make their content go viral.

      Social media platforms' recommendation algorithms are both complex and secretive, designed to precisely customize content by considering numerous factors such as posting time, interactions, and user location. While this personalization enhances user experience, the opacity of these algorithms raises concerns about privacy, data security, and content bias. Social media companies conceal algorithm details both to protect their innovations and to prevent malicious exploitation of their systems. However, this practice also presents challenges in ethics and transparency for the platforms.
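
      As a minimal sketch of the difference between a reverse-chronological feed and a multi-factor one, here is a toy Python ranking function; the weights and field names are invented for illustration, since real platforms keep their actual formulas secret.

      ```python
      import math
      import time

      def feed_score(post: dict) -> float:
          """Toy score mixing a few of the factors listed above (invented weights)."""
          hours_old = (time.time() - post["created_at"]) / 3600
          recency = math.exp(-hours_old / 24)          # decays over roughly a day
          engagement = math.log1p(post["likes"] + 2 * post["replies"])
          friend_boost = 1.5 if post["by_friend"] else 1.0
          return friend_boost * (0.6 * recency + 0.4 * engagement)

      posts = [
          {"id": 1, "created_at": time.time() - 3600, "likes": 5, "replies": 1, "by_friend": True},
          {"id": 2, "created_at": time.time() - 86400, "likes": 500, "replies": 40, "by_friend": False},
      ]

      # Reverse chronological: newest first.
      print([p["id"] for p in sorted(posts, key=lambda p: p["created_at"], reverse=True)])
      # Multi-factor ranking: engagement and relationships can outweigh recency.
      print([p["id"] for p in sorted(posts, key=feed_score, reverse=True)])
      ```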

    1. Content recommendations can go well when users find content they are interested in. Sometimes algorithms do a good job of it and users are appreciative. TikTok has been mentioned in particular as providing surprisingly accurate recommendations, though Professor Arvind Narayanan argues that TikTok’s success with its recommendations relies less on advanced recommendation algorithms, and more on the design of the site making it very easy to skip the bad recommendations and get to the good ones. Content recommendations can go poorly when they send people down problematic chains of content, like by grouping videos of children in a convenient way for pedophiles, or Amazon recommending groups of materials for suicide. 11.3.2. Gaming the recommendation algorithm: Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content. This is particularly important for people who make their money from social media content. For example, in the case of the simple “show latest posts” algorithm, the best way to get your content seen is to constantly post and repost your content (though if you annoy users too much, it might backfire). Other strategies include things like: clickbait, trying to give you a mystery you have to click to find the answer to (e.g., “You won’t believe what happened when this person tried to eat a stapler!”), done to boost clicks on their link, which they hope boosts them in the recommendation algorithm and gets their ads more views; trolling, where by provoking reactions they hope to boost their content more; and coordinated actions, where many accounts (possibly including bots) like a post, or many people use a hashtag, or people trade positive reviews. Youtuber F.D. Signifier explores the YouTube recommendation algorithm and interviews various people about their experiences (particularly Black Youtubers like himself) in this video (it’s very long, so we’ll put some key quotes below):

      TikTok's success may lie less in advanced recommendation algorithms than in a design that makes it easy for users to quickly skip content they're not interested in. This demonstrates the crucial role of combining good user interface design with algorithms to improve the accuracy of content recommendations and user satisfaction. At the same time, the negative impacts and manipulation strategies associated with content recommendation systems remind us of the importance of considering ethics and responsibility in their design.

    1. A disability is an ability that a person doesn’t have, but that their society expects them to have. For example: If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation. If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation. If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation. If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs would have a disability in that situation. Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another. There are many things we might not be able to do that won’t be considered disabilities because our social groups don’t expect us to be able to do them. For example, none of us have wings that we can fly with, but that is not considered a disability, because our social groups didn’t assume we would be able to. Or, for a more practical example, let’s look at color vision: Most humans are trichromats, meaning they can see three base colors (red, green, and blue), along with all combinations of those three colors. Human societies often assume that people will be trichromats. So people who can’t see as many colors are considered to be color blind, a disability. But there are also a small number of people who are tetrachromats and can see four base colors and all combinations of those four colors. In comparison to tetrachromats, trichromats (the majority of people), lack the ability to see some colors. But our society doesn’t build things for tetrachromats, so their extra ability to see color doesn’t help them much. And trichromats’ relative reduction in seeing color doesn’t cause them difficulty, so being a trichromat isn’t considered to be a disability. Some disabilities are visible disabilities that other people can notice by observing the disabled person (e.g., wearing glasses is an indication of a visual disability, or a missing limb might be noticeable). Other disabilities are invisible disabilities that other people cannot notice by observing the disabled person (e.g., chronic fatigue syndrome, contact lenses for a visual disability, or a prosthetic for a missing limb covered by clothing). Sometimes people with invisible disabilities get unfairly accused of “faking” or “making up” their disability (e.g., someone who …)

      The perspective provided offers a compelling view on disability, framing it as a societal construct rather than an individual's deficiency. It highlights the importance of recognizing both visible and invisible disabilities, advocating for a more inclusive and accommodating society that considers the diverse abilities of all individuals.

    1. There are many reasons, both good and bad, that we might want to keep information private. There might be some things that we just feel like aren’t for public sharing (like how most people wear clothes in public, hiding portions of their bodies). We might want to discuss something privately, avoiding embarrassment that might happen if it were shared publicly. We might want a conversation or action that happens in one context not to be shared in another (context collapse). We might want to avoid the consequences of something we’ve done (whether ethically good or bad), so we keep the action or our identity private. We might have done or said something we want to be forgotten or at least made less prominent. We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private. We might want to avoid physical danger from a stalker, so we might keep our location private. We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad). When we use social media platforms though, we at least partially give up some of our privacy. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be not helpful), and we want to open access to those “private” messages to prove that they were sent.

      I think this phenomenon raises multiple important questions: whether users are fully aware of and agree to the level of privacy they give up when using these platforms; what security measures social media companies have taken to protect sensitive data from unauthorized access; how transparent their data usage policies are; and to what extent users have control over managing their data and privacy settings. Although in some cases it is reasonable for social media companies to access and review messages, such as to prevent harm or respond to legal requirements, the key is to ensure that these practices are balanced with strong privacy protections and user control. This requires continuous dialogue between social media platforms, users, policymakers, and privacy advocates to find fair solutions that respect user privacy while providing valuable services.

    1. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time). But companies don’t always follow that proper practice for storing passwords. For example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly and then hackers leaked their password database of 153 million users. From a security perspective there are many risks that a company faces, such as: employees at the company misusing their access, like Facebook employees using their database permissions to stalk women; or hackers finding a vulnerability and inserting, modifying, or downloading information. For example: hackers stealing the names, Social Security numbers, and birthdates of 143 million Americans from Equifax; or hackers posting publicly the phone numbers, names, locations, and some email addresses of 530 million Facebook users, or about 7% of all people on Earth. Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target. Hackers can target individuals with attacks like: password reuse attacks, where if they find out your password from one site, they try that password on many other sites; tricking a computer into thinking they are another site (for example, the US NSA impersonated Google); and social engineering, where they try to gain access to information or locations by tricking people. For example: phishing attacks, where they make a fake version of a website or app and try to get you to enter your information or password into it (some people have made malicious QR codes to take you to a phishing site); or many of the actions done by the con man Frank Abagnale, which were portrayed in the movie Catch Me If You Can.

      These cases reveal several important issues in cybersecurity and privacy protection. They highlight that even large technology companies may have serious weaknesses in protecting user data: storing passwords in plain text or encrypting them improperly, along with data breach incidents, directly threatens users' personal privacy and security.
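
      The "special individual encryption process" the passage describes is usually salted, deliberately slow, one-way hashing. Here is a minimal sketch using only Python's standard library; parameters like the iteration count are illustrative, not a security recommendation:

      ```python
      import hashlib
      import hmac
      import os

      def hash_password(password: str) -> tuple[bytes, bytes]:
          """Return (salt, hash). A fresh random salt per password means two
          users with the same password still get different stored hashes."""
          salt = os.urandom(16)
          digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
          return salt, digest

      def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
          # The database can only confirm a guess, not recover the password:
          # checking requires re-running the slow hash with the stored salt.
          candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
          return hmac.compare_digest(candidate, stored)

      salt, stored = hash_password("correct horse battery staple")
      print(verify_password("correct horse battery staple", salt, stored))  # True
      print(verify_password("guess1", salt, stored))                        # False
      ```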

  3. Jan 2024
    1. Social media platforms collect various types of data on their users. Some data is directly provided to the platform by the users. Platforms may ask users for information like: email address, name, profile picture, interests, and friends. Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all this, but they might): when users are logged on and logged off; who users interact with; what users click on; what posts users pause over; where users are located; and what users send in direct messages to each other. Online advertisers can see what pages their ads are being requested on, and track users across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website. Additionally, social media might collect information about non-users, such as when a user posts a picture of themselves with a friend who doesn’t have an account, or a user shares their phone contact list with a social media site, some of whom don’t have accounts (Facebook does this). Social media platforms then use “data mining” to search through all this data to try to learn more about their users, find patterns of behavior, and in the end, make more money.

      This passage lays out the breadth of data social media platforms collect: information users provide directly, behavioral signals like clicks and pauses, location, and even data about non-users gathered from photos and shared contact lists. Data mining then turns all of this into patterns and predictions that can be monetized, which raises the question of whether users can meaningfully consent to that.
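
      As a sketch of what one logged interaction might look like (the field names here are hypothetical; real platform schemas are not public):

      ```python
      from dataclasses import dataclass, asdict
      import time

      @dataclass
      class InteractionEvent:
          """One hypothetical row in a platform's interaction log."""
          user_id: str
          event_type: str       # e.g., "click", "pause_over", "login", "dm_sent"
          target_id: str        # the post, profile, or ad involved
          timestamp: float
          approx_location: str  # e.g., city inferred from the IP address

      event = InteractionEvent("u123", "pause_over", "post987", time.time(), "Seattle")
      print(asdict(event))  # what gets stored, and later mined for patterns
      ```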

    1. One thing to note in the above case of candle reviews and COVID is that just because something appears to be correlated, doesn’t mean that it is connected in the way it looks like. In the above, the correlation might be due mostly to people buying and reviewing candles in the fall, and diseases, like COVID, spreading most during the fall. It turns out that if you look at a lot of data, it is easy to discover spurious correlations where two things look like they are related, but actually aren’t. Instead, the appearance of being related may be due to chance or some other cause. For example: Fig. 8.3 An example spurious correlation from Tyler Vigen’s collection of Spurious Correlations# By looking at enough data in enough different ways, you can find evidence for pretty much any conclusion you want. This is because sometimes different pieces of data line up coincidentally (coincidences happen), and if you try enough combinations, you can find the coincidence that lines up with your conclusion. If you want to explore the difficulty of inferring trends from data, the website fivethirtyeight.com has an interactive feature called “Hack Your Way To Scientific Glory” where, by changing how you measure the US economy and how you measure what political party is in power in the US, you can “prove” that either Democrats or Republicans are better for the economy. Fivethirtyeight has a longer article on this called “Science Isn’t Broken: It’s just a hell of a lot harder than we give it credit for.”

      He discusses the concept of spurious correlations, emphasizing the need to be careful when analyzing data about phenomena that appear to be related on the surface. In a large amount of data, it is easy to find things that appear to be related, but in fact these relationships may be accidental or caused by other factors. The authors use the example of candle reviews and COVID to illustrate how spurious correlations can arise from seasonal factors, highlighting the need for a multifaceted, in-depth analysis of the data before drawing conclusions. This paragraph is a good illustration of a common pitfall in data analysis: causal conclusions should not be drawn from superficial correlations alone. This is very helpful for understanding the complexity and multi-dimensionality of data analysis.
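
      The multiple-comparisons trap is easy to demonstrate. In this short Python sketch (using statistics.correlation, Python 3.10+), scanning many unrelated random series against a target almost always turns up a strong-looking correlation purely by chance:

      ```python
      import random
      import statistics

      random.seed(0)

      # One "target" series and 1,000 unrelated random series.
      target = [random.gauss(0, 1) for _ in range(12)]
      candidates = {f"series_{i}": [random.gauss(0, 1) for _ in range(12)]
                    for i in range(1000)}

      # Scan for the candidate that best "matches" the target.
      best_name, best_r = max(
          ((name, statistics.correlation(target, series))
           for name, series in candidates.items()),
          key=lambda pair: abs(pair[1]),
      )
      print(best_name, round(best_r, 2))  # typically |r| > 0.8, despite no real link
      ```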

    1. We can trace Internet trolling to early social media in the 1980s and 1990s, particularly in early online message boards and in early online video games. In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from. One set of the early Internet-based video games were Multi-User Dungeons (MUDs), where you were given a text description of where you were and could say where to go (North, South, East, West) and text would tell you where you were next. In these games, you would come across other players and could type messages or commands to attack them. These were the precursors to more modern Massively multiplayer online role-playing games (MMORPGS). In these MUDs, players developed activities that we now consider trolling, such as “Griefing” where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “Flaming” where a player intentionally starts a hostile or offensive conversation. In the 2000s, trolling went from an activity done in some communities to the creation of communities that centered around trolling such as 4chan (2003), Encyclopedia Dramatica (2004), and some forums on Reddit (2005).

      This description of the origins and development of Internet trolling is very accurate and insightful. From early social media and online gaming in the 80s and 90s, we can see early forms of trolling behavior, especially in online message boards and Multi-User Dungeon (MUD) games. These early interactions not only show the origins of trolling, but also provide important historical context for understanding today's online culture.

    1. A knock-off designer item does not offer the purchaser the same sort of connection to the designer brand that an authentic item does. Authenticity in connection requires honesty about who we are and what we’re doing; it also requires that there be some sort of reality to the connection that is supposedly being made between parties. Authentic connections frequently place high value on a sense of proximity and intimacy. Someone who pretends to be your friend, but does not spend time with you (proximity) or does not open themselves up to trusting mutual interdependence (intimacy) is offering one kind of connection (being an acquaintance) under the guise of a different kind of connection (friendship). This is not to say that there is no room for appreciating connections that are not fully honest, transparent, and earnest all the time. Social media spaces have allowed humor and playfulness to flourish, and sometimes humor and play are not, strictly speaking, honest.

      This piece talks about why being real and honest matters in friendships, much as it does with designer items. Just as a real designer brand is special because of its quality and its genuine connection to the designer, true friendships are special because of trust, proximity, and real connection. But the writer also notes that not everything needs to be serious or fully honest all the time: on social media, humor and playfulness have room to flourish even when they are not, strictly speaking, honest. It's about balancing real connections with having fun.

    2. These reactions make sense. Try to imagine the early days of human social life, before we started attaching our welfare to the land in terms of planting crops and building structures designed for permanence. Our nomadic forebears functioned in groups who coordinated in highly specialized ways to ensure the survival of the whole. Although such communities are often pictured as being prehistoric, primitive, and obsolete, we now know that such societies were and are highly sophisticated, often developing and depending on highly specified legal codes, some of which are still in use today in Bedouin communities in North Africa. Other nomadic groups, such as Roma people (which you may have heard derogatorily called ‘gypsies’), live within and around land-based nations and their various borders and laws. To ensure the survival of their ethnicity, cultures, and languages, they depend on being able to trust each other. The nations whose land we are living and studying on here also knew the importance of being able to know who can be trusted.

      Modern societies like those of the US and Europe rely on systems and rules so that people don't need to personally trust each other for everyday things: you can buy food or get help without knowing the person, which allows more focus on individual goals. Traditional nomadic groups like the Bedouins in North Africa and the Roma people, by contrast, depend heavily on trust and cooperation; everyone in the group has a role and they rely on each other. This shows a big difference in how modern and traditional societies work and what they value.

    1. Later, sometime after the printing press, Standage highlights how there was an unusual period in American history that roughly took up the 1900s where, in America, news sources were centralized in certain newspapers and then the big 3 TV networks. In this period of time, these sources were roughly in agreement and broadcast news out to the country, making a more unified, consistent news environment (though, of course, we can point out how they were biased in ways like being almost exclusively white men). Before this centralization of media in the 1900s, newspapers and pamphlets were full of rumors and conspiracy theories. And now as the internet and social media have taken off in the early 2000s, we are again in a world full of rumors and conspiracy theories.

      Overall, this text effectively demonstrates the changes in how information is disseminated and the impact of those changes on the way the public receives it. Each era has its unique challenges: from the unbridled spread of rumor in the early newspaper era, to the centralized reporting of the twentieth century, to the limitless proliferation of the internet and social media today. These changes reflect the ongoing struggle between the free flow of information and the assurance of journalistic accuracy and reliability.

    1. In the mid-1990s, some internet users started manually adding regular updates to the top of their personal websites (leaving the old posts below), using their sites as an online diary, or a (web) log of their thoughts. In 1998/1999, several web platforms were launched to make it easy for people to make and run blogs (e.g., LiveJournal and Blogger.com). With these blog hosting sites, it was much simpler to type up and publish a new blog entry, and others visiting your blog could subscribe to get updates whenever you posted a new post, and they could leave a comment on any of the posts.

      This paragraph provides a detailed overview of the evolution of blogging from the mid-1990s to 1999, highlighting the significant impact that dedicated blogging platforms had on blogging culture. Initially, updating and maintaining a blog required some technical knowledge, since bloggers had to manually add new content to their personal websites. With the advent of platforms such as LiveJournal and Blogger.com, however, it became much easier to create and manage a blog, allowing ordinary users to participate in blogging as well (see the sketch below of the feed-based subscription mechanism commonly used for blog updates).
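
      The "subscribe to get updates" mechanism the passage mentions is commonly implemented with RSS or Atom feeds, which feed readers poll for new entries. Below is a minimal sketch, assuming a hypothetical feed URL (substitute any real blog's feed), of how a reader lists a blog's latest posts; this illustrates the general technique, not how LiveJournal or Blogger specifically worked:

          # Fetch a blog's RSS 2.0 feed and list its entries.
          # FEED_URL is hypothetical -- replace with a real feed to run this.
          import urllib.request
          import xml.etree.ElementTree as ET

          FEED_URL = "https://example-blog.com/feed.xml"  # hypothetical

          with urllib.request.urlopen(FEED_URL) as resp:
              tree = ET.parse(resp)

          # RSS 2.0 nests posts as <channel><item>, each with <title>/<pubDate>.
          for item in tree.getroot().iter("item"):
              title = item.findtext("title", default="(untitled)")
              date = item.findtext("pubDate", default="(no date)")
              print(f"{date}: {title}")

      A reader that re-fetches the feed periodically and remembers the last entry it saw has, in effect, "subscribed" to the blog.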

    1. This means that media, which includes painting, movies, books, speech, songs, dance, etc., all communicates in some way, and thus are social. And every social thing humans do is done through various mediums. So, for example, a war is enacted through the mediums of speech (e.g., threats, treaties, battle plans), coordinated movements, clothing (uniforms), and, of course, the mediums of weapons and violence

      This statement highlights the multifaceted nature of media as tools of social communication. On this understanding, media are more than entertainment or information: they are fundamental tools that shape and reflect human interactions and social structures. It underscores the importance of considering how we communicate through these different mediums and what that communication means.