15 Matching Annotations
  1. Last 7 days
    1. The spread of these letters meant that people were putting in effort to spread them (presumably believing making copies would make them rich or help them avoid bad luck). To make copies, people had to manually write or type up their own copies of the letters (or later with photocopiers, find a machine and pay to make copies). Then they had to pay for envelopes and stamps to send it in the mail. As these letters spread we could consider what factors made some chain letters (and modified versions) spread more than others, and how the letters got modified as they spread.

      This is actually really funny because it's exactly what was happening when I was in middle school. There were always pictures that had some sort of "repost this for good luck" or "share this with someone else or you'll get bad luck." I think this is still pretty prominent on platforms that mostly host older users, because I still see my aunts and uncles reposting things like that on Facebook.

    1. Now, try to find the accessibility settings on the social media site and on your device. For each setting you see, try to come up with what disabilities that setting would be beneficial for (there may be multiple).

      I looked at Facebook for this activity, and funnily enough, Facebook doesn't really have many accessibility options beyond display settings (of which there are a whole two). One is dark mode, which I'm assuming helps people with eye sensitivities and anyone in a darker setting. The other controls notification timing, automating when notifications are dismissed, and I have no real idea who that would actually be useful for other than personal preference.

  2. Feb 2026
    1. Do you think there is information that could be discovered through data mining that social media companies shouldn’t seek out (e.g., social media companies could use it for bad purposes, or they might get hacked and others could find it)?

      The first thing that came to mind was the collection of precise location data (such as IP addresses and the specific address where a device is being used). While this can help in some cases (I know of one where police located an online bully who had made severe threats by pinging their IP address and pinpointing a specific house address), there are current cases where the same location pinging is used by ICE to track people down through their devices. It's something that can be used or abused.

    1. What was accurate, inaccurate, or surprising about your ad profile? How comfortable are you with Google knowing (whether correctly or not) those things about you?

      I was unable to see what Google thinks of me because apparently I have personalized ads turned off, along with every data collection option other than my YouTube activity. Despite that, I still feel like I get quite a few ads aligned with my interests, which makes me question whether turning personalized ads and data collection on or off really affects whether your data is being collected (I'm sure it doesn't).

  3. Jan 2026
    1. What do you think is the best way to deal with trolling?

      I think this is a difficult question to answer purely because every troll has a different motive, and different reactions motivate them differently. While ignoring trolls can be useful in some cases, it won't work in every case, as Film Crit Hulk explains; it can escalate to dangerous behavior like stalking, threatening, or confronting. On the other hand, engaging could still give them power, but it might also take it away. For example, if a rumor is being spread just for the fun of it, a simple shutdown like "Not true," with no further engagement, can be effective. Making your account private is another way to protect yourself from outside engagement, but it can also convince trolls they've won, which encourages them to continue (especially if the goal of their trolling was to push someone off the platform or off the internet entirely). Kicking trolls off the platform can be effective too, but it won't stop all of them, since they can always create another account; that's part of why stalking can be so hard to track or avoid. Overall, it's tough, and the best response depends on the situation.

    1. In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from.

      I think this still occurs in settings like video games. For example, tricking a new player into doing something that every experienced player knows doesn't work or will negatively impact them in some way. I see this most often when streamers try out a well-known game they're just late to playing (like Minecraft), and viewers suggest things they know are trolls. The example that comes to mind is telling someone to place and use a bed in the Nether in Minecraft (which actually makes it explode).

    1. 5.5.3. 8Chan (now 8Kun): 8Chan (now called 8Kun) is an image-sharing bulletin board site that was started in 2013. It has been host to white-supremacist, neo-nazi and other hate content. 8Chan has had trouble finding companies to host its servers and internet registration due to the presence of child sexual abuse material (CSAM), and for being the place where various mass shooters spread their hateful manifestos. 8Chan is also the source and home of the false conspiracy theory QAnon.

      Although it wasn't explicitly explained, the fact that 4chan was probably created because some 15-year-old kid wanted fewer restrictions, when the platform they came from already had a forum called "Anime Death Tentacle Rape Whorehouse," is already quite disturbing. But the fact that someone found 4chan to be TOO restrictive and made something with even less moderation is truly disappointing. It gives people too much room to feel comfortable saying dangerous things (such as the ideology of mass shooters) and being encouraged and enabled for it.

    1. for user in users_who_liked_our_post: display("Yay! " + user + " liked our post!") 'Yay! @pretend_user_1 liked our post!' 'Yay! @pretend_user_2 liked our post!' 'Yay! @pretend_user_3 liked our post!'

      This code looks like it would be used to notify people when others have liked their posts. Maybe that's not exactly how it's achieved, but the results reminded me of those notifications, and for someone with a lot of active followers who gets a lot of likes, I imagine code like this would make it easier to generate notifications at a larger scale, as in the sketch below.
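
      A rough sketch of what that scaling up might look like, reusing the book's pretend display() helper and an invented list of a thousand users:

      ```python
      # Sketch of generating "liked your post" notifications at a larger scale.
      # The display() helper and the user list are made up to mirror the book's example.

      def display(message):
          print(message)

      users_who_liked_our_post = ["@pretend_user_" + str(n) for n in range(1, 1001)]

      # The same loop works whether 3 people or 1000 people liked the post.
      for user in users_who_liked_our_post:
          display("Yay! " + user + " liked our post!")
      ```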

    1. As you can see, TurboTax has a limit on how long last names are allowed to be, and people with too long of names have different strategies with how to deal with not fitting in the system.

      Is it really necessary to have a character limit on personal information like this? In a way, it can feel a little demeaning for people with longer last names, especially since some ethnic groups commonly have names that would go over the limit (a toy illustration of that kind of cap is sketched below).
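
      As a toy illustration of the constraint being described (the 20-character cap and the truncation strategy here are invented, not TurboTax's actual behavior):

      ```python
      # Hypothetical last-name field with a hard character cap (the limit is made up).
      MAX_LAST_NAME_LENGTH = 20

      def store_last_name(last_name):
          # One common strategy: silently truncate whatever doesn't fit.
          return last_name[:MAX_LAST_NAME_LENGTH]

      print(store_last_name("Smith"))                                # fits unchanged
      print(store_last_name("Wolfeschlegelsteinhausenbergerdorff"))  # cut off at 20 characters
      ```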

    1. What country are you from? What if you were born in one country, but moved to another shortly after? What if you are from a country that no longer exists like Czechoslovakia? Or from an occupied territory? How many people live in this house? Does a college student returning home for the summer count as living in that house? How many words are in this chapter? Different programs use different rules for what counts as a “word” E.g., this page has “2 + 2 = 4”, which Microsoft Word counts as 5 words, and Google Docs counts as 3 words.

      This definitely opened my perspective on data constraints. In the earlier reflection, I figured the best way to store information for social media would be through pre-set categories (for things like relationship status, address, etc.), but there are definitely important details that are hard to simplify without cutting something out (though I'm sure no one needs additional detail on someone's relationship status). I guess that's why some forms let you put down both a permanent address and a temporary address, for people who are only residing somewhere for a short-term opportunity. The word-count example is a good illustration of how much the chosen rule matters (sketched below).
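
      The "2 + 2 = 4" example can be reproduced with two equally reasonable counting rules (a rough sketch; these are not Microsoft Word's or Google Docs' actual algorithms):

      ```python
      # Two plausible definitions of "word" give different counts for the same text
      # (illustrative only; not Word's or Docs' real rules).
      text = "2 + 2 = 4"

      # Rule A: every whitespace-separated chunk is a word.
      rule_a = len(text.split())

      # Rule B: only chunks containing a letter or digit count as words.
      rule_b = len([chunk for chunk in text.split() if any(c.isalnum() for c in chunk)])

      print(rule_a, rule_b)  # 5 3
      ```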

    1. But Kurt Skelton was an actual human (in spite of the well done video claiming he was fake). He was just trolling his audience. Professor Casey Fiesler [c16] talked about it on her TikTok channel:

      This actually got me. When I watched the first video and it was "revealed" that the creator was "not real," I fully believed it because I've definitely been tricked by AI videos before. After watching the second video by Professor Casey Fiesler, I was genuinely shocked, both because I fell for it so fast and because I felt double tricked. I might've believed it as fast as I did because I've never seen any of Kurt Skelton's videos before, so it was easy for me to imagine that he might've just been curated by another creator. Maybe had I seen that creator's videos prior to that video, I would've been more skeptical since I would've seen a longer history of the creator and his personality. Definitely a shocker to me either way though!

    1. Does the fact that it is a bot change how you feel about its actions?

      Not really. In a lot of controversial cases, bot behavior isn't excused, because at the end of the day someone programmed it. Maybe it's because I don't know exactly how bots work, but I feel like a bot's actions are limited to what its programmer allows it to do, so if a bot is able to do something (such as post racist comments, as described in an earlier section of this chapter), then something in its code allowed it to take in racist information and repeat it. In that earlier case, if the bot was going to reuse other Twitter users' public tweets about the company, there should have been safeguards to detect racist remarks and keep them out of the bot's vocabulary and output. You could argue the scenario simply slipped their minds, but I think anything released to the public should be tested with a smaller community for feedback before reaching a mass audience. Surely someone would have thought that the bot could pick up racist tweets (a rough sketch of that kind of safeguard is below).
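
      A very rough sketch of the kind of safeguard imagined here, with an invented blocklist and reply format (not how any real bot actually worked):

      ```python
      # Hypothetical pre-posting safeguard for a bot that echoes public tweets.
      # The blocklist, the check, and the reply format are all made up for illustration.
      blocked_words = {"offensiveword1", "offensiveword2"}  # stand-ins for a real moderation list

      def is_safe_to_repeat(tweet_text):
          # Refuse to echo any tweet containing a blocklisted word.
          return not any(word in blocked_words for word in tweet_text.lower().split())

      def build_reply(tweet_text):
          if is_safe_to_repeat(tweet_text):
              return "Thanks for the feedback: " + tweet_text
          # Fall back to a neutral reply instead of repeating the tweet.
          return "Thanks for the feedback!"

      print(build_reply("love this product"))
      print(build_reply("offensiveword1 ruined everything"))
      ```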

    1. One final note we’d like to make here is that, as we said before, we can use ethics frameworks as tools to help us see into situations. But just because we use an ethics framework to look at a situation doesn’t mean that we will come out with a morally good conclusion.

      I feel like this statement is a little contradictory to what this entire chapter is about. We've learned about many different kinds of ethical frameworks, which means we've learned that what counts as morally right and wrong differs for everyone. Nihilism outright rejects the existence of morally good conclusions, so how can we say that using an ethics framework to look at a situation doesn't guarantee a morally good conclusion, when everyone's definition of a "morally good conclusion" differs? I initially agreed with the statement, but on further reflection, I think it's much harder to firmly judge a situation to be a morally bad conclusion, since that judgment can fluctuate depending on who you ask and where. There isn't much point in considering someone else's ethical framework, if it differs from our own, when we're still going to conclude definitively from our own framework that their situation is morally bad.

    1. How do you think about the relationship between social media and “real life”?

      In contrast to how people say "social media isn't real life," I think people tend to be a more exaggerated version of themselves when engaging online. That might be because they don't feel an immediate consequence for poor or extreme behavior, and because it's very easy to just say whatever you want and click post. It's easy to post impulsive thoughts without thinking about how others would react, especially when you don't physically see any effects in your offline life. So in a way, I think social media is part of our real life, just a much more exaggerated part where social pressure is generally lighter.

    1. What do you think is the responsibility of tech workers to think through the ethical implications of what they are making? Why do you think the people who Kumail talked with didn’t have answers to his questions?

      Once we're in a certain headspace (especially around something we're really excited about), we can overlook even the simplest things. Still, it should be our responsibility to seek outside perspectives and test other people's reactions to our inventions and creations, to gain new perspectives and make sure our products are ready for release. I think that might have been the issue with the people Kumail talked with: they didn't think to seek feedback on their new products, which led to a narrow perspective and left them without answers to his questions.