33 Matching Annotations
  1. Last 7 days
    1. How have your views on automation and programming changed (or been reinforced)?

      My view of automation and programming has changed because I now see them as ethical as well as technical. Code does not just solve problems; it also reflects choices about whose needs matter, what behaviors get rewarded, and what harms may be scaled.

    2. How have your views on social media changed (or been reinforced)?

      This course reinforced my view that social media is not neutral, because platform design and algorithms shape what people see and how they interact. I now pay more attention to how engagement-driven systems can encourage outrage, misinformation, and unhealthy online behavior.

    1. What if government regulations said that social media sites weren’t allowed to make money based on personal data / targeted advertising? What other business models could they use? How would social media sites be different?

      If social media companies were not allowed to profit from personal data and targeted advertising, they would probably need to rely more on subscriptions, public funding, donations, or smaller non-targeted ads. I think this could make platforms less addictive and less focused on surveillance, although it might also make some services less convenient or less accessible to people who cannot afford to pay.

    1. In what ways do you see capitalism, socialism, and other funding models show up in the country you are from or are living in?

      I see capitalism most clearly in how major social media platforms depend on advertising, data collection, and competition for user attention. At the same time, I also see non-capitalist or less profit-driven models in public services, nonprofits, and platforms like Wikipedia, which suggests that communication systems do not have to be fully organized around profit in order to be useful.

    1. Would there be any notifications sent when a tweet is retracted?

      Yes, I think people who liked, retweeted, or replied to the tweet should get a notification if it is retracted, especially if the original post included misinformation. This would help reduce further spread of the post and give users a chance to reconsider or correct what they shared.

    2. How would that retracted tweet look when viewed?

      A retracted tweet should have a clear label at the top saying that the post has been retracted by the author, so viewers immediately understand that the user no longer stands by it. I think the original content should still be visible behind a click or warning screen, because that keeps accountability while also showing the correction first.

  2. Feb 2026
    1. How do you think social media platforms should handle crowd harassment? Are there things they should do to reduce it? Should they consider whether harassment is justified in some instances?

      I think platforms should treat crowd harassment as a design problem, not just a rule-breaking issue. They could reduce harm by limiting virality during pile-ons, detecting coordinated attacks, and offering stronger tools for victims such as bulk-blocking or temporary shield modes. I don’t think platforms should try to decide whether harassment is morally justified: even if someone deserves criticism, mob punishment and threats cause disproportionate harm. Accountability can exist without harassment, and platforms should enforce behavioral standards rather than moral judgments.

    2. How do social media platforms make harassment possible?

      This section made me think about how platform design directly enables harassment at scale. Features like public replies, quote-posting, and algorithmic amplification make it easy for one critical post to turn into a coordinated pile-on. Harassment online isn’t just about individual bad actors — it’s about systems that reward outrage and visibility. When engagement is prioritized over safety, harassment becomes structurally incentivized.

    1. What do you think a social media company’s responsibility is for the crowd actions taken by users on its platform?

      Social media companies have significant responsibility because they design the environment where crowd behavior occurs. While individuals are responsible for their own actions, platforms structure incentives, visibility, and amplification systems. If harmful crowdsourcing occurs (e.g., harassment campaigns, misinformation swarms, or coordinated hate), platforms cannot claim neutrality, because algorithms that prioritize engagement often amplify emotionally charged or polarizing content. Therefore, companies have a responsibility to design systems that reduce the amplification of harmful crowd behavior, moderate coordinated abuse and misinformation, be transparent about how content is promoted, and provide reporting and accountability mechanisms.

    2. In what ways do you think you’ve participated in any crowdsourcing online?

      I’ve participated in crowdsourcing in both obvious and subtle ways. For example, when I leave reviews on platforms like Yelp or Google Reviews, I’m contributing to collective knowledge that helps others make decisions. Rating products on Amazon, answering questions on Reddit, or even liking and sharing posts are all forms of distributed crowd input that shape algorithms and visibility.

    1. Volunteer Moderation

      I think the concept of Volunteer Moderation is really interesting because it relies entirely on the unpaid time and labor of community members to keep a platform running. It’s wild that massive sites like Reddit and Wikipedia stay functional mainly because people are willing to moderate and edit them for free.

    1. Relational Ethics Frameworks

      I think a relational approach would stop treating us like isolated data points and start treating us like a digital neighborhood that actually cares about keeping peace. Instead of just getting a 'post removed' notification, the system would focus on fixing the broken relationship between users and restoring the community's trust.

    1. What are the ways social media companies monitoring of mental health could be beneficial or harmful?

      Monitoring could be beneficial if it allows platforms to intervene and provide resources to users showing signs of crisis or self-harm. However, it becomes harmful if that sensitive psychological data is sold to advertisers or used to manipulate vulnerable users when they are at their lowest points.

    2. In what ways have you found social media bad for your mental health and good for your mental health?

      Social media can be detrimental when I fall into the trap of doomscrolling or comparing my everyday life to someone else’s curated highlight reel, which often fuels anxiety. On the other hand, it has been a great way for finding community and staying connected with long-distance friends who provide genuine emotional support.

    1. What experiences do you have of social media sites making particularly bad recommendations for you?

      The worst ones are definitely the random posts recommended on my personal feed. They often create "echo chambers" that reinforce my existing beliefs while burying diverse perspectives, or they exploit emotional triggers by pushing content designed to keep me scrolling.

    2. What experiences do you have of social media sites making particularly good recommendations for you?

      When I browse social media, I have noticed that the longer I spend on it, the more accurate the recommendations become. Sometimes, even after I merely talk about certain products, the platform will recommend related items, which benefits me because it saves time when I am looking for related sets of products.

    1. Design Justice

      It’s wild how much tech can fail when the people building it don't represent the people using it, like that soap dispenser that literally couldn't see dark skin. It really drives home the point that design justice isn't just about the final product, but about making sure disabled and marginalized people are actually the ones in the room making the decisions.

    1. Assistive technologies

      This section highlights how managing disability accessibility is shifting from putting the burden on individuals to "mask" or fix themselves toward designers creating flexible tools that actually adapt to the person. It’s a move from just making someone act "normal" with assistive tech to using ability-based design where programs change their own behavior to match what a user can do.

    1. What incentives do social media companies have to be careless with privacy?

      Companies may be careless because prioritizing rapid product growth and innovation often takes precedence over the slow, costly process of implementing rigorous security protocols. When the cost of a potential data breach settlement is lower than the profit gained from aggressive data collection, companies may view privacy risks simply as a manageable "cost of doing business."

    2. What are your biggest concerns around privacy on social media?

      My primary concern involves the creation of invasive behavioral profiles that track my interests and location without explicit, ongoing consent. I worry that this data could be leaked or sold to third parties, leading to identity theft or the subtle manipulation of my personal decisions through targeted algorithms.

    1. 8.4. How is this data used

      Social media platforms use our data to keep us hooked and maximize their profits, showing that 'free' services often come at the cost of our time and attention. While targeted ads can be helpful, they also give platforms the power to manipulate users or even influence major events like elections.

    1. COVID-19

      It's fascinating how data mining can reveal stories from simple numbers, like the connection between COVID-19 surges and candle reviews. However, the section on spurious correlations is a great reminder that just because two trends line up—like margarine consumption and divorce rates—it doesn't mean one actually causes the other.

  3. Jan 2026
    1. What do you think is the best way to deal with trolling?

      I think the best move is to just stop giving them a reaction, since their whole goal is to disrupt the space or get someone to snap for the lulz. Engaging just feeds their need to feel smart or powerful, so it’s way more effective to ignore the bait and keep the conversation moving.

    1. lulz.

      It is interesting to realize that trolling is much more than just people being mean for "the lulz": it can actually be a weird form of social protest or a way to make a point by exposing how gullible people are. It's basically using fake posts to mess with people's heads, whether that's to feel powerful or just to gatekeep a community from "normies."

    1. How do you notice yourself changing how you express yourself in different situations, particularly on social media?

      I express myself differently in different situations. In real life, I tend to listen to other people's opinions, while in online communication, such as commenting on RedNote, I am often sharper and more blunt, because people worry less about their relationships with strangers.

    1. Does anonymity discourage authenticity and encourage inauthentic behavior?

      This could be paradoxical, as anonymity can discourage authenticity by providing a shield that allows individuals to bypass social accountability and engage in deceptive or uncharacteristic behaviors. Conversely, it can also foster a unique form of "true" authenticity by enabling people to share vulnerable thoughts and identities without the fear of real-world judgment or consequences.

    1. 5.6. Social Media Design

      I learned that affordances make apps feel natural to use, while friction is a design choice used to intentionally slow users down, like making an ad hard to close. It was surprising to find out that the creator of infinite scroll now regrets it because it removed the friction that normally helps people stop scrolling.

    1. Web 1.0

      The late 1990s marked the rise of "Web 1.0," where social interaction was often limited to personal webpages or separate text-based platforms like IRC and BBS. It is interesting to see how early communication evolved from simple email to real-time chat systems like AIM, which allowed users to manage friend lists and view online status.


    1. Dictionaries: The other method of grouping data that we will discuss here is called a “dictionary” (sometimes also called a “map”).

      I learned that dictionaries are perfect for storing labeled data, like a user's "handle" or "profile picture," because they map specific keys to values. By nesting these dictionaries inside a list, researchers can organize information for thousands of different social media users in one structured format.
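      The nested structure described above can be sketched in a few lines of Python. The field names ("handle", "profile_picture") follow the section's example; the data itself is made up for illustration, not from a real API:

```python
# Each user is a dictionary mapping labeled keys to values; nesting
# those dictionaries inside a list organizes many users in one
# structured format. All data here is invented for illustration.
users = [
    {"handle": "@ada", "profile_picture": "ada.png", "followers": 1200},
    {"handle": "@alan", "profile_picture": "alan.png", "followers": 800},
]

# Keys let us look up labeled data directly, instead of remembering
# which numeric position in a list a value was stored at.
for user in users:
    print(user["handle"], "has", user["followers"], "followers")
```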

    1. Dates and Times

      I find the discussion about the ambiguity of "yesterday" particularly insightful because it highlights how seemingly objective data, like a timestamp, actually depends on the observer's context. If a social media platform's automated system flags behavior based on a specific day, that day starts and ends at different times for the user and for the server. This mismatch can lead to unfair outcomes, which suggests that information systems are not neutral tools: they encode specific temporal assumptions that may not reflect the lived experience of global users.
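      That timezone ambiguity can be demonstrated in a few lines of Python; the specific timestamp and zone below are just an illustrative example:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One instant in time, stored in UTC as servers commonly do.
instant = datetime(2026, 1, 15, 2, 30, tzinfo=timezone.utc)

# The same instant falls on different calendar days for the server (UTC)
# and for a user in Los Angeles, so "yesterday" depends on the observer.
server_day = instant.date()
user_day = instant.astimezone(ZoneInfo("America/Los_Angeles")).date()

print(server_day)  # 2026-01-15
print(user_day)    # 2026-01-14
```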

    1. Loops

      The use of loops and conditionals in social media bots highlights a major ethical challenge regarding scale and context. While a human can manually block a few trolls, a bot using a loop can automatically block hundreds of users in seconds based on a simple conditional, such as whether a post is tagged as coming from an iPhone or an Android device. This demonstrates that automation doesn't just make tasks easier; it changes the power dynamics of online interaction by allowing a single user to exert massive influence without manual effort.
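      A minimal sketch of that loop-plus-conditional pattern; the replier list and the block_user function are hypothetical stand-ins, not a real platform API:

```python
def block_user(handle):
    """Hypothetical stand-in for a platform's block API call."""
    print("blocked", handle)

# Invented example data: recent repliers with a post-source tag.
recent_repliers = [
    {"handle": "@troll1", "source": "iPhone"},
    {"handle": "@friend", "source": "Android"},
    {"handle": "@troll2", "source": "iPhone"},
]

blocked = []
for user in recent_repliers:          # loop: visit every replier
    if user["source"] == "iPhone":    # conditional: the source tag
        block_user(user["handle"])
        blocked.append(user["handle"])
```

      One short script like this acts on every matching account at once, which is exactly the scale-shift the annotation describes.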

    1. Reflection questions

      It is striking how specific design features like the location tag ('Hillingdon, London') transformed a digital post into a real-world manhunt. This case proves that platform features are never neutral; by automatically attaching metadata, Twitter provided the tools necessary for users to track Sacco’s flight in real-time, blurring the line between online discourse and physical safety.

    1. There are many more ethics

      I would like to add the framework of Utilitarianism (Jeremy Bentham and John Stuart Mill) to this section. Utilitarianism focuses on the outcomes of actions, aiming for 'the greatest good for the greatest number.' This is particularly relevant to social media algorithms that prioritize overall engagement and user satisfaction metrics, though it often raises ethical concerns when the 'minority' users' experiences are sacrificed for the majority's data trends.