34 Matching Annotations
  1. Mar 2026
    1. As a potential worker in the tech industry, you might someday find yourself in a position where you have influence over how social media platforms are designed, programmed, or operated (e.g., you could be a programmer, or designer, or content moderator). We hope that if you find yourself in one of these positions, you consider the ethics of what you are doing. We hope you could then bring those concerns into how you design and implement automated systems for social media sites.

      As a student majoring in interaction design and informatics, I believe I now have a much deeper understanding of social media, its inner workings, and the users on those platforms. This class has prepared me for my future career, and I'm glad I had the opportunity to gain all these amazing insights. I'm sure that what I learned here will not be forgotten, but instead expressed in the work I do.

    1. Fig. 21.1 The start of an xkcd comic compiling a hundred years of complaints about how technology has sped up the pace of life. (full transcript of comic available at explainxkcd)

      I think the quote, "We fire off a multitude of rapid and short notes, instead of sitting down to have a good talk over a real sheet of paper" is the perfect encapsulation of the evolution and result of print. When everyone has access to create and share media at a rapid pace, the quality and meaning of their words become less impactful over time. In a time when words don't cost a penny to share, people are less intentional with what they upload. If we were limited to sharing only the most important of thoughts, we would write only what was most important and meaningful to us.

    1. Another change was that as computers became small enough for people to buy them for their homes, they became seen as toys for boys and not girls. The same transition is seen in video game consoles from being for the whole family to being for boys only (e.g., the Nintendo Game Boy). In the end, computer programming became profitable and male-dominated.

      As a girl, I had always noticed this. I would often hang out with my male cousins, who loved to play video games, and when I wanted a chance to play, my grandmother would retort that video games weren't for girls and that I should be playing with dolls instead. Even when I was older and could access the internet on my own, the games online for girls were always catered toward beauty or princesses; there was never anything of substance made for girls the way there was for boys. Video games have a long history of this: when they were first being developed, many game developers struggled to create games catered to girls, unlike the shooter and fighting games meant for boys. Thankfully, over the years game developers have truly begun to understand what types of games women really want, and are continuing to build and innovate each year.

    1. Surveillance capitalism began when internet companies started tracking user behavior data to make their sites more personally tailored to users. These companies realized that this data was something that they could profit from, so they began to collect more data than strictly necessary (“behavioral surplus”) and see what more they could predict about users.

      Although I can see how this benefits the companies partaking in surveillance capitalism, I don't believe it should be allowed online. Users should be able to control whether or not they receive ads targeted at them based on the information the website takes from them. Automatically collecting their personal information and data and using it to display ads is an invasion of privacy that the user, in most cases, has no control over.

    1. In what was an unusual turn of events for a Twitter “main character of the day,” Jeremy Schneider later made an apology that was mostly accepted by the Twitter users who had criticized his Tweet:

      Honestly, I think this tweet thread was incredibly refreshing. It's pretty rare nowadays to witness people honestly owning up to their mistakes, reflecting on how what they did was wrong, and apologizing for their actions. Users are often quick to get defensive online, as they sometimes see criticism from others as an opportunity to garner more interactions and bring more attention to their post or profile. Rage-baiting has become so popular these days that seeing people be sincere in their apologies has become quite a rarity. If I were one of the users who got upset over his original tweet, I would've been quick to forgive and forget after seeing his thread afterwards.

    1. ‘It’s on social media, so it’s public!’ one could argue as a case for people’s right to act like forensic analysts on social media, and that is true. But this justification is typically valid when a) the person posting is someone of note, like a celebrity or a politician, and b) when the stakes are even a little bit high. In most cases of normal-person canceling, neither standard is met. Instead, it’s mob justice and vigilante detective work typically reserved for, say, unmasking the Zodiac killer, except weaponized against normal people. […] Platforms like TikTok, where even people with few or no followers often go viral overnight, expedite the shaming process.

      Social media is often seen as a "free range" for users to harass other users, with the idea that because all the information they found on a person is already online, there's nothing objectively bad about publicizing it. There is also the matter of users often hiding behind anonymous accounts, where they're able to share this information without fear of retaliation. When users feel that someone has done something wrong, they often take things way too far. I've witnessed a situation where a woman was filming at a baseball game and a stranger in the background made a disgusted face at her, and many users presumed the stranger did this with racist intent, as the person filming was a person of color. Users online then began to dig up the stranger's information: her house, her job, her LinkedIn, etc. They began to write fraudulent bad reviews under her employer's review section, and eventually got the woman fired. This is just one of many cases of people on the internet taking things absolutely too far and causing great harm to a stranger and her family, when people weren't even entirely sure whether what she did was actually racist, or deserved that type of behavior at all.

    1. The Ku Klux Klan (KKK) is an American white-supremacist terrorist organization known to harass and murder Black people and others. Members of the KKK keep their identity secret by wearing white robes and hoods over their faces. Often influential and powerful members of society were part of the KKK, such as police officers and government officials. In the 1920s, a magazine called Tolerance published lists of members of the KKK and their addresses, what we would now call “doxing.” They hoped to end the hateful and violent KKK organization.

      Honestly, I believe that these groups built on the backs of hate, racism, bigotry, and harassment do not deserve the same respect that normal people should get. These groups have proven that they do not care for the well-being of other people and instead intend to hurt them. I believe that doxxing people in these groups doesn't come close to the amount of damage they've caused (even just by existing within those groups). Releasing someone's personal information would usually be seen by me as a bad thing, but I honestly could not care less if these types of people experienced the consequences of their actions.

    1. In addition, fake crowds (e.g., bots or people paid to post) can participate in crowd harassment. For example:

      I recently came across a YouTuber (who was previously cancelled and recently returned to uploading videos online) who was posting hate comments underneath the videos of other content creators who were making videos about him and the reasons he got cancelled. He felt threatened that these people were trying to stop him from coming back to YouTube, so he bought thousands of bot accounts to comment on and harass these YouTubers in their comment sections. It's pretty interesting the lengths some people will go to get back into the spotlight.

  2. Feb 2026
    1. When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing. For example, Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors, and on StackOverflow “A 2013 study has found that 75% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.” We see the same phenomenon on Twitter:

      I genuinely have never thought about this, especially considering how I'm a "lurker" on almost all social media. Whenever I decide to post something to my account it's instantly archived and then later reuploaded, so as not to gain any attention from my followers on my account. I like to post just to have a nice-looking feed, and posting this way lets me focus more on the overall aesthetic of my feed, rather than the engagement and interactions I get from my followers.
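      That power-user/lurker split is easy to see if you simply count posts per account. Here is a minimal sketch in Python (the usernames and counts are invented for illustration):

```python
from collections import Counter

# Toy contribution log: most users post a handful of times,
# while one "power user" posts constantly.
posts = ["amy"] * 60 + ["ben"] * 25 + ["cal", "dee", "eli", "fay", "gus"] * 3

counts = Counter(posts)
total = sum(counts.values())
top_user, top_posts = counts.most_common(1)[0]
print(f"{top_user} alone wrote {top_posts / total:.0%} of all {total} posts")  # amy alone wrote 60% of all 100 posts
```

The same counting approach (tally contributions per user, then look at the top few accounts' share) is roughly how the Wikipedia and StackOverflow statistics quoted above get computed.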

    1. When social media users work together, we can consider what problem they are solving. For example, for some of the Tiktok Duet videos from the virality chapter, the “problem” would be something like “how do we create music out of this source video” and the different musicians contribute their own piece to the solution. For some other examples:

      As someone who's been on the internet for a while, you become quite desensitized to the number of times strangers will come together to attempt to solve some sort of mystery, oftentimes going to incredible lengths to get the answers they're seeking. I've even seen online groups dedicated to solving cold missing-person cases. One instance that comes to mind of social media users working together to gather information is when many fans of "Love is Blind" (a blind-dating Netflix show) were suspicious of a man and his life history, as his life story seemed quite fabricated and out of the ordinary. So, as TikTok users do, they dug online until they found the information they were seeking; one user even found the obituary of his grandfather and managed to get the information they needed from it. I personally think this is quite an invasion of privacy, seeing as they went through many of his personal accounts and family members to get this information, but on the other hand, the information was public, so it's pretty morally gray.

    1. When social media companies like Facebook hire moderators, they often hire teams in countries where they can pay workers less. The moderators then are given sets of content to moderate and have to make quick decisions about each item before looking at the next one. They have to get through many posts during their time, and given the nature of the content (e.g., hateful content, child porn, videos of murder, etc.), this can be traumatizing for the moderators:

      I had never truly considered how moderators of social media platforms must feel, and it's incredibly sad to know that their job leaves them with trauma. That seems like an incredibly difficult thing to have to deal with, especially when it's your job. I hope these platforms switch to automated systems to moderate this type of content; I don't feel a person should be subjected to trauma just because it's their job.

    1. Social media sites also might run into legal concerns with allowing some content to be left up on their sites, such as copyrighted material (like movie clips) or child pornography. So most social media sites will often have rules about content moderation, and at least put on the appearance of trying to stop illegal content (though a few will try to move to countries that won’t get them in trouble, like 8kun is getting hosted in Russia). With copyrighted content, the platform YouTube is very aggressive in allowing movie studios to get videos taken down, so many content creators on YouTube have had their videos taken down erroneously.

      I think it's really interesting to see just how seriously different platforms treat copyrighted content being posted on their websites. YouTube is well known for giving strikes to creators who play a 10-second clip of copyrighted music, even resorting to banning creators from posting content. Meanwhile, TikTok is much more lenient, so much so that users frequently joke about watching full movies on TikTok through clips posted online. I wonder why TikTok is so much more lenient in its copyright policies compared to YouTube; I'm sure a lot of it has to do with the fact that most YouTubers create monetized videos, while TikTok is mostly a platform where monetization isn't that common, unless creators are sponsored to advertise a product.

    1. While there are healthy ways of sharing difficult emotions and experiences (see the next section), when these difficult emotions and experiences are thrown at unsuspecting and unwilling audiences, that is called trauma dumping. Social media can make trauma dumping easier. For example, with parasocial relationships, you might feel like the celebrity is your friend who wants to hear your trauma. And with context collapse, where audiences are combined, how would you share your trauma with an appropriate audience and not an inappropriate one (e.g., if you re-post something and talk about how it reminds you of your trauma, are you dumping it on the original poster?).

      I have experienced being on the viewer side of users' trauma dumping, whether in video format, posted publicly for everyone to see and interact with, or under a comment section, where no one asked to read about their traumatic past. In many ways I can see how this may be therapeutic or even comforting, knowing that there are people out there to listen and talk about your past with; however, there are also cons. Most users scrolling social media somewhat expect to come across negative, sad, or frustrating stories, but many may not feel comfortable being told personal, traumatic stories, as these could be triggering to their own past or make them skeptical, fearful, or deeply uncomfortable. There have been various times where, under a person's post sharing their recovery from an eating disorder, I have seen users in the comments describing all the ways they indulged in their own unhealthy habits, which could not only make other users and the poster uncomfortable but even push them back into bad habits. Sharing one's traumatic story online carries so many potential consequences that I think it would be far more beneficial for these people to speak to a therapist or close friends (who are comfortable discussing it), rather than strangers on the internet who never asked to read or listen to their story.

    1. Some researchers have found that people using social media may enter a dissociation state, where they lose track of time (like what happens when someone is reading a good book).

      Through research I've done in the past, I've found that this sort of "dissociation state" usually occurs because when people use social media, they are mindlessly scrolling until they find their next dose of short-term dopamine. It's also what keeps users on the platform: a constant need and search for dopamine from a funny video or a cute dog keeps the user scrolling until the hours pass and it's suddenly 4am.

    1. Though even modifying a recommendation algorithm has limits in what it can do, as social groups and human behavior may be able to overcome the recommendation algorithm's influence.

      I have personally experienced this, especially on Twitter and Threads. These two apps prioritize engagement of any type: positive, negative, or neutral, the algorithm will take it as a sign that you'd like to see more of that content, and suggest posts similar to the one you interacted with. I think TikTok is a bit smarter about how its algorithm is designed: whenever I leave a negative comment on a negative post I don't agree with, it doesn't show me more of that content, but rather people also criticizing that negative content. It's much smarter in the way it shows posts to its users; it's able to read whether a user has a positive or negative response and adjust accordingly.

    1. Now, how these algorithms precisely work is hard to know, because social media sites keep these algorithms secret, probably for multiple reasons: they don’t want another social media site copying their hard work in coming up with an algorithm; they don’t want users to see the algorithm and then be able to complain about specific details; and they don’t want malicious users to see the algorithm and figure out how best to make their content go viral.

      Out of the large variety of social media apps, I believe TikTok is the most renowned for its algorithm and the methods by which it recommends videos to users. Just the other day I was in a situation where I wanted to watch videos on silent, and the TikTok I came across was captioned something relating to the audio that was playing, but because I couldn't hear it I went to the comments for clues. To my surprise, I was met with other users in the same situation as me, commenting things such as: "So we're all watching TikTok on mute rn?" or "I'm never on mute, how does TikTok know that everyone who's seeing this video is watching it on mute???" I found it pretty crazy, and I was kind of freaked out that it took such a niche scenario I was in and put a video on my page that perfectly matched what I and hundreds of thousands of other people were doing. It wasn't like I had just said some words around my phone and the app decided to show me a video related to those words; this was a situation where I didn't think my phone would have any idea that I was on silent.

    1. Additionally, attempts to make disabled people (or people with other differences) act “normal” can be abusive, such as Applied Behavior Analysis (ABA) therapy for autistic people, or “Gay Conversion Therapy.”

      I think it's interesting to see what is and isn't depicted as a disability. As someone who doesn't believe that being part of the LGBT community makes you "disabled," and who thinks it's actually quite normal, seeing groups of people form "Gay Conversion Therapy" programs makes me incredibly sad. It puts people in a position where they feel there is something inherently wrong with them that deserves changing. It's like putting people who wear glasses into a "conversion therapy" where they're told to simply change who they are. It's not as simple as changing your clothes; it's something that's a part of you and that makes you, you. It's interesting to see the lengths people will go to change someone who's "different" or not part of "the norm".

    1. If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs would have a disability in that situation.

      Although I wouldn't consider this particular example a "disability," I think it definitely affects the large group of people who don't fit into the "average height" category. As someone who's 5'5", I've never had to deal with this issue, as airplane seats are usually quite comfortable for me, but when I met my boyfriend I found out relatively quickly that booking flights is a pretty big hassle for this reason. A flight of three hours or less can be bearable, but a 15-hour flight around the world is a nightmare for anyone taller than average. You either have to pay extra to choose a seat by the emergency exit, or keep your legs in an awkward position in the aisle for the entirety of the flight. It's easy not to consider the minority of a population when designing something for consumers, but it is essential to keep the minority in mind, especially since lives could be at risk if the device or technology being designed is for health-related reasons.

    1. Hackers finding a vulnerability and inserting, modifying, or downloading information. For example:

      When reading this section I'm reminded of a pretty infamous case that happened not long ago. The "Tea" app was used among young women all across North America, and it was known for being an app where women would go online and post about men they'd been with and their bad, weird, or good attributes, essentially "spilling the tea." It was marketed as a "safe space" for women, where they could post anonymously and warn each other about potential catfishes, offenders, or overall bad men. However, in 2025 a hacker managed to leak users' information, including but not limited to addresses and around 13,000 government IDs. This happened because the Tea app hadn't properly encrypted or protected the data, allowing the hacker to access virtually all of the users' information.

    2. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.

      I find the concept of security and privacy on social media incredibly intriguing, as users are almost always given the impression that just by having a username and password, their data is completely protected. However, it's relatively easy to hack into someone's account if you have the right knowledge and know where to look. Especially for those with the capability to manage an app or website from the back end, the ability to go through data within an application can lead to data leaks, and private information you thought no one would have can suddenly be given to the world.

    1. Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended. See more in

      I had previously known that intentionally poisoning datasets was possible, and that unintentionally doing so was also possible, but I wasn't aware that something like a TikTok video could make that big an impact. It's interesting to see how one influencer's video and her audience were able to effectively ruin most of the researchers' data, considering her audience is mainly women in their 20s.
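      A toy simulation makes the skew concrete (all demographics and counts here are invented for illustration): a sudden viral influx from one narrow group swamps whatever balance the sample had before.

```python
from collections import Counter

# Hypothetical survey respondents before the viral video: a balanced mix.
before_viral = ["men 30s", "women 20s", "men 50s", "women 40s"] * 25  # 100 responses

# After a viral TikTok, 900 more responses arrive from one narrow demographic.
after_viral = before_viral + ["women 20s"] * 900

def share(pool, group):
    """Fraction of responses in `pool` coming from `group`."""
    return Counter(pool)[group] / len(pool)

print(f"women in their 20s before: {share(before_viral, 'women 20s'):.0%}")  # 25%
print(f"women in their 20s after:  {share(after_viral, 'women 20s'):.1%}")
```

Any analysis that assumed the original balanced mix would be badly biased by the second pool, which is why the researchers' datasets became unusable as intended.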

    1. Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’

      As someone who's on social media pretty often, I can 100% attest that any sort of interaction with an ad or sponsored post will cause your timeline to be filled with other ads for a similar product, if not the same one. Sometimes ads do get my attention and I look at the item to see the price, and when I return to my homepage I see the same post three or four times in the same hour. I also recognize that gambling advertisements are often shown to children, which I think should 100% be monitored; children aren't able to grasp the concept of money at their age, and the idea that it's legal to show them these ads when they're using electronics is beyond me.
  3. Jan 2026
    1. Have you witnessed different responses to trolling? What happened in those cases? What do you think is the best way to deal with trolling?

      I actually have seen different responses to trolling that have worked. I saw a video a woman made on TikTok recently where she had been receiving harassing comments from a user on her videos, and when she realized that he lived around three hours from her, she went to his place of work and recorded herself confronting him. When she confronted him, she told him that if he didn't apologize she'd tell his wife about the Grindr account he had. He then ended up apologizing and saying he wouldn't do it again. One of my favorite quotes from her was: "You don't know me, I don't know you, but I was the person you left that comment under. I just wanted you to know: you see how easy I found you?" An absolutely deserved consequence for his actions.

    1. In the Black Lives Matters protests of 2020, Dallas Police made an app where they asked people to upload videos of protesters doing anything illegal. In support of the protesters, K-pop fans swarmed the app and uploaded as many K-pop videos as they could eventually leading to the app crashing and becoming unusable, and thus protecting the protesters from this attempt at Police surveillance.

      I find it incredibly interesting that such a nice part of history came from K-pop fans trolling in protest of the police by crashing the app; it's such a small gesture, but it was incredibly impactful. Their voices were heard and seen. I was also there to witness when TikTok users came together to reserve tickets to a Trump rally with no intention of attending, in an effort to ensure no one would show up to the event. Watching how powerful the internet can be when users come together is insane, but an important lesson.

    1. The way we present ourselves to others around us (our behavior, social role, etc.) is called our public persona. We also may change how we behave and speak depending on the situation or who we are around, which is called code-switching.

      As a person of color, I find myself code-switching relatively often. The culture I was surrounded by growing up is incredibly different from others', and when I'm in situations where I'm not talking with people who share a similar culture, I tend to bottle up and switch to a different version of myself: a version that's more approachable and respectful, a bit more timid. Most of the time I don't do it intentionally; it's just turned into a habit for me, as I'm sure it has for other people of color.

    1. As a rule, humans do not like to be duped. We like to know which kinds of signals to trust, and which to distrust. Being lulled into trusting a signal only to then have it revealed that the signal was untrustworthy is a shock to the system, unnerving and upsetting. People get angry when they find they have been duped. These reactions are even more heightened when we find we have been duped simply for someone else’s amusement at having done so.

      Although this incident happened years ago, it's still a repeating pattern we see in social media today. Oftentimes when I come across videos on platforms such as TikTok or Instagram, people in the comments are debating whether or not something was fabricated as "rage-bait." Rage-bait as a term is relatively new, but the concept is as old as time: whether in newspapers, storytelling, movies, or music, rage-bait is an effective method to garner criticism but, more importantly, engagement. The more people talk about and critique a work, the more it rises in popularity.

    1. One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      From the perspective of the social media companies, I can see why they'd add infinite scroll to their apps: it keeps users from leaving and lets them engage with more content, watch more ads, etc. But as a user I find infinite scroll incredibly harmful, especially to children and mentally ill people. When you're stuck in a scrolling trance, it can be hard to stop, and before you know it you've spent the entirety of your day scrolling on TikTok. One can become addicted to their phone, and although the health effects of social media aren't that well studied, it's easy to tell that long-term overuse of one's phone can negatively impact their health.

    2. Sometimes designers add friction to sites intentionally. For example, ads in mobile games make the “x” you need to press incredibly small and hard to press to make it harder to leave their ad:

      As a design major I've encountered so many app interfaces that intentionally guide the user somewhere just to push a new feature or an ad. One of the biggest examples I can think of is Spotify, which recently moved the 'My Library' tab and added a 'Create' tab in its place. Replacing icons that they know users visit so often it's like muscle memory tricks the user into clicking on a feature they didn't mean to. Instagram is also notorious for doing this.

    1. Dates turn out to be one of the trickier data types to work with in practice. One of the main reasons for this is that what time or day it is depends on what time zone you are in. So, for example, when Twitter tells me that the tweet was posted on Feb 10, 2020, does it mean Feb 10 for me? Or for the person who posted it? Those might not be the same. Or if I want to see for a given account, how much they tweeted “yesterday,” what do I mean by “yesterday?” We might be in different time zones and have different start and end times for what we each call “yesterday.”

      I notice this sort of glitch sometimes when I'm using the app BeReal. Although the notification goes off for everyone at the same moment (no matter what time zone you're in), the way the content is displayed is based entirely on what country you're currently in. For example, I have a friend who was visiting South Korea, and when the notification went off for us to take a photo through the app, it indicated that hers was 17 hours late. It's interesting that there's still a lack of solutions for this when technology has advanced so fast.
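      The "whose yesterday?" problem from the passage can be shown in a few lines of Python; the zones below are chosen to mirror the 17-hour gap between the US Pacific coast and South Korea from the BeReal example:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# One single instant: late evening of Feb 10, 2020 in Seattle...
instant = datetime(2020, 2, 10, 23, 30, tzinfo=ZoneInfo("America/Los_Angeles"))

# ...which is already the next calendar day for a viewer in Seoul,
# 17 hours ahead of Pacific Standard Time.
print(instant.date())                                     # 2020-02-10
print(instant.astimezone(ZoneInfo("Asia/Seoul")).date())  # 2020-02-11
```

The same instant falls on two different calendar dates depending on the observer, which is exactly why "posted yesterday" is ambiguous without a time zone attached.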

    2. In addition to the main components of the images, sound, and video data, this information is often stored with metadata, such as: The time the image/sound/video was created The location where the image/sound/video was taken The type of camera or recording device used to create the image/sound/video etc.

      I find it so intriguing that, simply by posting a photo or tweet, a platform can gather immense amounts of data from the user. This type of data (metadata) is typically accessible to anyone who knows their way around a computer, and it's easy to imagine how dangerous it can be in the wrong hands.
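      As a sketch of the idea, here is a hypothetical metadata record like the one described above, with a small helper that strips GPS fields before a photo is shared. The field names are illustrative only, not any platform's actual schema:

      ```python
      # Hypothetical metadata attached to an uploaded photo (illustrative fields only).
      photo_metadata = {
          "created": "2020-02-10T15:42:00",
          "gps_latitude": 47.6062,
          "gps_longitude": -122.3321,
          "camera": "Pixel 4",
      }

      def strip_location(metadata):
          """Return a copy of the metadata with GPS fields removed."""
          return {k: v for k, v in metadata.items() if not k.startswith("gps_")}

      safe = strip_location(photo_metadata)
      print(safe)  # creation time and camera remain, location is gone
      ```

      Some platforms do something like this automatically on upload, but the original file on your device usually still carries the full metadata.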

    1. If you run the code above you will see that the program pauses as it displays the output above. These pauses may come in handy when posting tweets, to make it look like your bot is taking time to type in the text. You will get a chance to try that in the next practice section.

      I always wondered how programmers create these sorts of pauses, and it's cool to know that it's done with simple commands like these! I was also not previously aware that 'display' is a command for showing something on the screen; I had thought that 'print' was the main way to do so.
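      A minimal sketch of the pausing idea, using Python's standard `time.sleep`. The messages here are made up, and `print` is used in place of the `display()` function the book's Jupyter examples rely on:

      ```python
      import time

      # Hypothetical messages a bot might "type out" one at a time.
      messages = ["Typing first tweet...", "Typing second tweet..."]

      for message in messages:
          print(message)
          time.sleep(1)  # pause for one second, as if the bot were typing

      print("Done posting.")
      ```

      `time.sleep` takes a number of seconds (fractions are allowed), so the delay between posts can be tuned to look more human.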

    1. We also would like to point out that there are fake bots as well, that is real people pretending their work is the result of a Bot. For example, TikTok user Curt Skelton posted a video claiming that he was actually an AI-generated / deepfake character:

      As someone who's majoring in a creative field, I find it both incredibly interesting and concerning just how advanced AI is getting, and I wonder where this rapid innovation will take us in just a few years. It's jarring to watch a video on TikTok or Instagram, fully believe it to be real, and then feel the need to dissect it to check whether it actually is. I can't begin to imagine how the job industry will change due to AI, but with innovation there (hopefully) comes opportunity.

    1. Something is right or wrong because God(s) said so. Euthyphro Dilemma: “Is the pious [action] loved by the gods because it is pious, or is it pious because it is loved by the gods?” (Socrates, 400s BCE Greece) If the gods love an action because it is morally good, then it is good because it follows some other ethics framework. If we can figure out which ethics framework the gods are using, then we can just apply that one ourselves without the gods. If, on the other hand, an action is morally good because it is loved by the gods, then it doesn’t matter whether it makes sense under any ethics framework, and it is pointless to use ethics frameworks.1

      As someone who grew up in a religious household, I often asked questions challenging this idea. It's interesting to think that an action could be reprimanded or praised by your god(s) or religious circle simply because it was written into the guidelines of a scripture. I also think this way of thinking is dangerous, as it opens the possibility for people within a religion to misinterpret or maliciously translate certain texts in order to push harmful propaganda, and the chance of mistranslation is high given that most of these texts were written hundreds of years ago.

    1. We also see this phrase used to say that things seen on social media are not authentic, but are manipulated, such as people only posting their good news and not bad news, or people using photo manipulation software to change how they look

      I think this is an interesting concept, as we are usually conditioned to think that the internet "isn't real" and that most things online are fabricated or exaggerated. However, just because this is common online doesn't mean that "real life" is a place where everyone is completely authentic; some people may only want to share the good parts of their lives with friends or family while keeping anything less flattering to themselves, and vice versa. It's hasty to say that everything we see on social media "is not real," since there are plenty of real people behind each account. But because people can hide behind potentially anonymous accounts, it is much easier to fabricate stories or life experiences, or to center one's entire online presence around just the portion of one's life one wants the internet to see, essentially creating an artificial online persona that doesn't reflect who one is in real life.