36 Matching Annotations
  1. Dec 2023
    1. As a social media user, we hope you are informed about things like: how social media works, how they influence your emotions and mental state, how your data gets used or abused, strategies in how people use social media, and how harassment and spam bots operate.

      I can safely say that I have learned plenty from this course about how to navigate social media. I think that some of the contents of this course opened me up to think more about the "bots" within these platforms. I can definitely start to see more of them appearing, especially in the comments section of TikTok. I wish we had covered more about what happens when companies sell our data.

    1. Reddit Praw library (posting, searching, etc.)

      While I thought that learning through the Reddit API was very interesting, I wish we had gone through other platforms. I know from a previous data science class that the Spotify API is fairly easy to access, so working with more platforms would've been more enjoyable for sure. I also think that some type of web scraping would've been nice to learn, since we did do a lot of extraction.
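      For reference, a minimal sketch of the PRAW calls we practiced (searching and posting); the credentials are placeholders you'd get from Reddit's app settings, and the subreddit names are just examples:

      ```python
      import praw

      # Placeholder credentials: create an app at https://www.reddit.com/prefs/apps
      reddit = praw.Reddit(
          client_id="YOUR_CLIENT_ID",
          client_secret="YOUR_CLIENT_SECRET",
          username="YOUR_USERNAME",
          password="YOUR_PASSWORD",
          user_agent="class-demo by u/YOUR_USERNAME",
      )

      # Search a subreddit for posts matching a query.
      for submission in reddit.subreddit("learnpython").search("web scraping", limit=5):
          print(submission.title, submission.score)

      # Submit a new text post (requires a logged-in account).
      reddit.subreddit("test").submit(
          title="Hello from PRAW",
          selftext="Posting via the Reddit API for a class exercise.",
      )
      ```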

  2. Nov 2023
    1. It wasn’t designed for what kids around the world would actually want. They didn’t take input from actual kids around the world. OLPC thought they had superior knowledge and just assumed they knew what people would want.

      This quotation emphasizes the arrogant and discriminatory assumptions behind the One Laptop Per Child initiative. The leaders never conferred with the children, the actual users, about which features would be most suitable and helpful. They assumed their understanding of technology mattered more than the lived reality of children in impoverished nations, and because the devices fit actual needs so poorly, that assumption ruined the project.

    1. CEOs of social media companies, under pressure from the board of directors, might also make decisions that prioritize short-term profits for the shareholders over long-term benefits, leading to what author Cory Doctorow calls the “Enshittification” of platforms (See his article: The ‘Enshittification’ of TikTok: Or how, exactly, platforms die., also archived here).

      This quotation draws attention to a significant problem with capitalism and publicly listed businesses. Profits are prioritized over all other considerations, which over time can degrade user experience and quality. This tendency is what the word "enshittification" describes. Although capitalism fosters innovation, its unrelenting focus on profits must be balanced against the interests of other stakeholders.

    1. The term “cancel culture” can be used for public shaming and criticism, but is used in a variety of ways, and it doesn’t refer to just one thing.

      This concept of "cancel culture" gets taken too far sometimes. Even if someone is reflecting their own opinion which may have some controversy, the individual might be cancelled. It's important to differentiate between holding someone accountable for genuinely harmful actions and penalizing someone for merely expressing a viewpoint that doesn't align with the majority.

    1. Shame is the feeling that “I am bad,” and the natural response to shame is for the individual to hide, or the community to ostracize the person.

      I think that this needs to be talked about as a bigger issue. Some parents shame and guilt their kids in hopes of having their kids achieve better results, but this approach can be incredibly damaging. It instills a deep-seated sense of inadequacy and low self-worth in children, which can lead to long-term psychological effects.

    1. Well, individuals can block or mute harassers, but the harassers may be a large group, or they might make new accounts.

      I think there should be some automation to stop harassment: if a user enters text containing ___, the comment is immediately removed. This kind of proactive moderation could be a game changer in reducing the prevalence of online harassment. While freedom of speech is important, there needs to be a balance so that the online environment stays safe and inclusive for everyone.
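      A minimal sketch of this idea, with a hypothetical placeholder blocklist rather than real terms:

      ```python
      # Hypothetical blocklist; a real system would use a curated list.
      BLOCKLIST = {"badword1", "badword2"}

      def should_remove(comment: str) -> bool:
          """Return True if any blocklisted term appears as a word in the comment."""
          return any(word in BLOCKLIST for word in comment.lower().split())

      print(should_remove("this comment contains badword1"))  # True -> auto-remove
      print(should_remove("a perfectly friendly comment"))    # False -> keep
      ```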

    1. When Amnesty International looked at online harassment, they found that: Women of colour, (black, Asian, Latinx and mixed-race women) were 34% more likely to be mentioned in abusive or problematic tweets than white women. Black women were disproportionately targeted, being 84% more likely than white women to be mentioned in abusive or problematic tweets.

      This is truly unbelievable. The fact that there's harassment at all is absurd and on top of that, targeted groups receive it way more than others. It reflects deeply ingrained biases and systemic issues within our society. The internet (which has the potential to be a space for positive communication and exchange) is instead being used as a platform for showing hate/discrimination.

    1. This small percentage of people doing most of the work in some areas is not a new phenomenon. In many aspects of our lives, some tasks have been done by a small group of people with specialization or resources. Their work is then shared with others. This goes back many thousands of years with activities such as collecting obsidian and making jewelry, to more modern activities like writing books, building cars, reporting on news, and making movies.

      This completely makes sense. As referenced in the Wikipedia example, most people aren't visiting the site to publish content, but rather to learn. I think this is almost obvious: if you go somewhere in real life, you usually aren't going there to work, but for your own sake (maybe to purchase an item). Even 1% for Wikipedia editors seems high to me, since I'd guess the ratio of people visiting Wikipedia to people editing it is far more lopsided than that.

    1. Amazon Mechanical Turk: A site where you can pay for crowdsourcing small tasks (e.g., pay a small amount for each task, and then let a crowd of people choose to do the tasks and get paid).

      Side note: I actually plan to use Amazon Mechanical Turk soon. I'm in the CSE Makeability Lab and we're building a product that involves AR/VR with a running assistant. Using Amazon Mechanical Turk, we plan to get users' opinions on certain designs and gather some general input from them. The platform is a great way to get a large number of people to finish your survey (even if you have to pay them x amount).
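      A rough sketch of how posting such a survey task might look with boto3's MTurk client, assuming the survey is hosted at an external URL; the URL, reward, and counts are made-up placeholders, and this targets MTurk's sandbox endpoint so no real money moves:

      ```python
      import boto3

      # Sandbox endpoint so nothing is spent while testing.
      mturk = boto3.client(
          "mturk",
          region_name="us-east-1",
          endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
      )

      # ExternalQuestion points workers at a survey hosted elsewhere (placeholder URL).
      question_xml = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
        <ExternalURL>https://example.com/design-survey</ExternalURL>
        <FrameHeight>600</FrameHeight>
      </ExternalQuestion>"""

      hit = mturk.create_hit(
          Title="Give feedback on AR running-assistant design mockups",
          Description="Answer a short survey about interface designs.",
          Reward="0.50",                    # paid per completed assignment
          MaxAssignments=100,               # how many workers may respond
          AssignmentDurationInSeconds=600,
          LifetimeInSeconds=86400,
          Question=question_xml,
      )
      print(hit["HIT"]["HITId"])
      ```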

    1. Another strategy for content moderation is using bots, that is computer programs that look through posts or other content and try to automatically detect problems. These bots might remove content, or they might flag things for human moderators to review.

      Sometimes I think this may actually prove to be a problem. I know that automated moderators follow a certain algorithm, but they aren't necessarily always accurate. If a certain word is blocked and the user replaces a letter with a look-alike number, this isn't always caught. I think there should be better algorithms that normalize the text before matching, rather than just removing posts that contain a specific literal string.
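      A minimal sketch of that improvement: map common letter-for-number substitutions back to letters before matching, so "h4te" still matches "hate" (the mapping and blocklist here are illustrative, not a real moderation list):

      ```python
      # Map common look-alike characters back to letters before matching.
      SUBSTITUTIONS = str.maketrans(
          {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"}
      )
      BLOCKLIST = {"hate"}  # placeholder term

      def should_flag(comment: str) -> bool:
          normalized = comment.lower().translate(SUBSTITUTIONS)
          return any(term in normalized for term in BLOCKLIST)

      print(should_flag("I h4te this"))  # True: caught after normalization
      print(should_flag("I hate this"))  # True
      print(should_flag("nice post"))    # False
      ```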

    1. Facebook uses hired moderators to handle content moderation on the platform at large (though Facebook groups are moderated by users). When users (or computer programs) flag content, the hired moderators will look at it and decide what to do.

      I feel like companies don't do this enough. These multi-billion dollar companies can definitely hire part-time moderators and integrate them into the safety infrastructure of their platforms. While it requires some pay, there's an incentive for both sides: the moderator gets paid for doing work, and the company keeps the platform friendly for its users. An example of a working case I've seen is the generic "Discord mods," who aren't necessarily paid but typically do a good job of keeping a server stable and in check (a model that could be applied to a large-scale platform like Facebook as well).

    1. Munchausen Syndrome (or Factitious disorder imposed on self) is when someone pretends to have a disease, like cancer, to get sympathy or attention

      This is absurd, but unfortunately I have seen it before. I'm not sure how people are ethically okay with pretending to have some type of health condition in the hopes of receiving attention. The worst part is that nine times out of ten it works, and they end up getting the attention, because there's no way for viewers to actually confirm whether the person does or does not have the condition. I think a good rule of thumb is to be cautious and not believe the first thing you see.

    1. and yet they still allowed teenage girls to use Instagram.

      As social media is one of the biggest causes of mental health problems for this targeted group (teenage girls), why don't they just terminate the accounts? I know it may seem wasteful, since chances are they'll create another one, but I think the threat of termination would discourage individuals from continuing to use Facebook in this case.

    1. Additionally, white Americans often use images and gifs of Black people reacting and expressing emotions. This modern practice with gifs has been compared to the earlier (and racist) art forms of blackface, where white actors would paint their faces black and then act in exaggerated unintelligent ways.

      I don't think I've ever seen this type of practice before. Is it common? I feel like I'm on social media a lot and have never noticed white users in particular relying on gifs of Black people to express reactions. Besides not having seen the modern version, I hadn't heard of blackface either. It's probably a good thing either way, as it does seem racially motivated...

    1. The spread of these letters meant that people were putting in effort to spread them (presumably believing making copies would make them rich or help them avoid bad luck)

      This process seems extremely strenuous, and I feel like it wouldn't work that well. In comparison to something like modern-day scams, I don't see what incentive the person sending these letters has. Also, I would love to see statistics on how effective these letters actually were, and to hear from the writers themselves what their purpose in writing them was.

    1. Recommendations for friends or people to follow can go well when the algorithm finds you people you want to connect with.

      I wish there were some way for these recommendation algorithms to build full profiles of each of your friends. From my perspective, not all of your friends have the same interests, and the algorithm might wrongly infer that because an entire friend group is connected, they all like soccer. Even though friend groups often form around a shared interest, any one member's media interests could easily skew the algorithm's recommendations for the rest of the group.
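      A toy sketch of that failure mode: a naive recommender that pools a friend group's interests and pushes the most common one to every member, even someone who never signaled it (all names and interests are made up):

      ```python
      from collections import Counter

      friend_interests = {
          "alice": ["soccer", "cooking"],
          "bob": ["soccer", "chess"],
          "carol": ["soccer", "painting"],
          "dave": ["photography"],  # dave never signaled interest in soccer
      }

      # Pool everyone's interests and recommend the group's top interest to all.
      pooled = Counter(i for interests in friend_interests.values() for i in interests)
      top_interest, _count = pooled.most_common(1)[0]

      for person in friend_interests:
          print(f"recommend '{top_interest}' content to {person}")
      # dave now gets soccer content purely because of who his friends are.
      ```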

    1. Similarly, recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes. This can happen whether or not the creators of the algorithm intended these outcomes.

      I feel like many people misunderstand this. Recommendation algorithms can never be 100% accurate, since an algorithm is only as good as the data it's fed. If a user's activity leans toward soccer one day and Suits the next, there's only so much of a profile it can build. It's a misconception that the algorithms themselves produce biased outcomes; they're simply built on the data they're given, so in my opinion the bias mostly reflects the input data rather than the algorithm.

  3. Oct 2023
    1. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use it

      While the goals of universal design are noble, truly accessible spaces require more than just technical compliance. Designers must engage with disabled communities, understand their diverse needs, and ensure inclusion goes beyond minimum standards. It's not enough to just have ramps and automatic doors (there must be a deeper commitment to making spaces welcoming and usable for all).

    1. Some people (like many with chronic pain) would welcome a cure that got rid of their disability. Others (like many autistic people), are insulted by the suggestion that there is something wrong with them that needs to be “cured,” and think the only reason autism is considered a “disability” at all is because society doesn’t make reasonable accommodations for them the way it does for neurotypical people.

      This quotation emphasizes a significant difference in how various disability communities view their disabilities. Some people may be looking for a "cure," while others accept their disability as a part of who they are. It's a complex topic, so we have to be careful not to assume that everyone with a disability feels the same way about it.

    1. Metadata: Sometimes the metadata that comes with content might violate someone’s privacy.

      This is a really interesting example that highlights how metadata can unintentionally reveal private information. Even if the content itself is anonymized, metadata like geotags can give away identities and locations. Slightly unrelated, but after reading up on McAfee, its founder had quite the story, one that ultimately ended in his death...
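      A minimal sketch of checking a photo for geotag metadata with Pillow, assuming a recent Pillow version; "photo.jpg" is a placeholder path, and 34853 is the standard EXIF GPSInfo tag:

      ```python
      from PIL import Image

      img = Image.open("photo.jpg")   # placeholder filename
      exif = img.getexif()
      gps = exif.get_ifd(34853)       # GPSInfo IFD: latitude/longitude refs and values
      if gps:
          print("This photo carries GPS coordinates:", dict(gps))
      else:
          print("No geotag found.")
      ```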

    1. Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target. Hackers can target individuals with attacks like:

      Adding to this: it's also important to note that different hackers have different objectives. The two main types I'm aware of are hackers who go after high-profile targets so they can broadcast a message from the compromised account and stir up news, and hackers who focus on accounts with a few hundred followers and try to convert them into their own accounts (essentially grabbing the account for its small follower base in order to show those people their product).

    1. Data can be poisoned intentionally as well

      I actually think this happens more often than people expect. It's very easy to manipulate or tweak data to favor your argument, and a reader often can't distinguish the lie. That's why it's important to maintain some skepticism when coming across a dataset (especially a large one).
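      A toy illustration of how little poisoning it takes to move a statistic an argument rests on (the numbers are made up):

      ```python
      from statistics import mean

      honest = [52, 48, 50, 51, 49, 47, 53]   # genuine measurements
      poisoned = honest + [200, 210, 205]     # three planted outliers

      print(mean(honest))    # 50.0
      print(mean(poisoned))  # 96.5 -- nearly doubled by just three bad rows
      ```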

    1. Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence.

      This quote is concerning, to say the very least. In one of my classes that discusses bias in algorithms, I've started to worry about the potential for abuse if we let AI make judgments about people based on physical features. As of now, GPT models decline to provide that type of output to users, but there are certainly workarounds.

    1. If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith

      If the only way to justify harmful actions is by adopting a framework that ignores harm, then I think you're essentially admitting you're not engaging sincerely with ethics. Instead, you're just looking for a way out or an excuse. Adding on, a genuinely thoughtful person will already be aware of the possible ethical stances.

    1. In these games, you would come across other players and could type messages or commands to attack them.

      This sentence interests me because, as a player of modern multiplayer games, it's fascinating to consider how the ability to anonymously interact with and antagonize other players enabled harmful behaviors like griefing and flaming even in the text-based environments of early games. Though multiplayer games have evolved a lot since the MUD days, the potential for these types of interactions definitely still exists.

    1. This trend brought complicated issues of authenticity because presumably there was some human employee that got charged with running the company’s social media account.

      The trend of companies adopting informal social media personas raises interesting questions about authenticity. On one hand, an approachable brand voice on social media can help companies connect with customers. But there is often a real person behind the account, charged with maintaining that fun persona. I think it's probably challenging to express the brand's desired tone consistently, especially if it doesn't align with the employee's own personality.

    1. How do you notice yourself changing how you express yourself in different situations, particularly on social media?

      When I think about how I express myself on social media versus in real life, I realize my persona doesn't change very much across contexts. I tend to share openly and authentically whether I'm posting online or talking with friends. My views and sense of humor remain fairly consistent. Of course, I'm a bit more filtered online since I'm conscious of speaking to a broader audience. But for the most part, I feel comfortable being myself no matter the situation.

    1. 8Chan (now called 8Kun) is an image-sharing bulletin board site that was started in 2013. It has been host to white-supremacist, neo-nazi and other hate content.

      Based on some of my previous knowledge, I know that 8chan has frequently struggled to find web hosting and domain registration services due to this type of harmful content. Many major internet and web services have banned 8chan because of its hate speech and other policy violations. This shows how toxic online communities like 8chan that refuse to moderate illegal/dangerous content can face serious consequences.

    1. These pages had limited interaction, so you were more likely to load one thing at a time and look at a separate page for each post or piece of information.

      This bullet point highlights how early social media on Web 1.0 was much less interactive than today's platforms. Users couldn't seamlessly view streams of content, but had to load separate pages for each post. The experience seemed to be way more fragmented compared to modern social feeds.

    1. Even if you are not a utilitarian, it is good to remind ourselves to check that we’ve got all the data before doing our calculus.

      This resonated with me because it highlights the importance of having comprehensive information before making ethical judgments. We must make an effort to consider all relevant perspectives and potential impacts, not just data that confirms our assumptions. In my opinion, seeking complete data is crucial for moral reasoning.

    1. All data is a simplification of reality.

      This quote stood out to me because it highlights how data representation inherently involves tradeoffs. When we create datasets, we make choices about how to abstract the complexity of the real world. These choices enable certain insights while obscuring others (data isn't ever a perfect replica of reality).

    1. “Gender Pay Gap Bot”

      The description of the Gender Pay Gap Bot using automation to call out inequality really stood out to me. The bot seems justified in its goal of drawing attention to pay discrepancies. However, I wonder whether the public-shaming method could breed resentment rather than actual change, and whether the bot's approach can withstand the backlash it invites.

    1. Python is in a group of programming languages called imperative programming languages

      In addition, I've learned from a previous CS class that imperative programming languages like Python use statements that run sequentially. This matches what the reading says about Python being an imperative language where you list out steps that run in order. However, other languages like HTML and CSS work differently as they are declarative rather than imperative. These declarative languages focus on describing the end result, without needing to specify each step sequentially.
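      A minimal illustration of that contrast in Python's imperative style, where each statement runs in order and updates state step by step:

      ```python
      # Imperative: spell out each step; state changes as the loop runs.
      total = 0
      for n in [1, 2, 3, 4]:
          total = total + n   # each statement updates the running total
      print(total)            # 10

      # A declarative language would instead describe the desired result
      # (e.g., HTML declares page structure without listing steps to build it).
      ```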

  4. Sep 2023
    1. There are many more ethics frameworks that we haven’t mentioned here. You can look up some more here.

      One additional ethics framework I researched is Communitarian Ethics.

      • It primarily focuses on the shared values and goals of a community (what is ethical is what strengthens the community).
      • The community defines morality through traditions, stories, and practices. Individuals find meaning through participating in the community.
      • Critiques liberal individualism. Individuals only exist within social contexts. There is no abstract, ideal individual.
      • Key Figures: Charles Taylor (Canada), Alasdair MacIntyre (UK), Michael Sandel (USA)

      I tried to keep the same format as the previous ethical frameworks mentioned.

    1. Justine lost her job at IAC, apologized, and was later rehired by IAC.

      This is simply absurd in my opinion. As shown at the beginning of the article, Justine was a PR director, which makes it incredibly ironic that someone in such a senior PR role would make comments like these. The whole point of public relations (PR) is to manage how others perceive or feel about a company, brand, etc. For the director herself to make such a comment, and then somehow get rehired by the same company, seems very unreasonable. I'm wondering if IAC rehired her for the same position...