20 Matching Annotations
  1. Dec 2024
    1. You aren’t likely to end up in a situation as dramatic as this. If you find yourself making a stand for ethical tech work, it would probably look more like arguing about what restrictions to put on a name field (e.g., minimum length), prioritizing accessibility, or arguing that a small piece of data about users is not really needed and shouldn’t be tracked. But regardless, if you end up in a position to have an influence in tech, we want you to be able to think through the ethical implications of what you are asked to do and how you choose to respond.

      It emphasizes the vital importance of ethical decision-making in our daily tech work, even in areas that might seem minor, like accessibility or data tracking. It's inspiring to think about how small, intentional actions can help foster a more ethical tech landscape. Additionally, collective organizing, such as through the Alphabet Workers Union, underscores the significance of solidarity and shared responsibility in effecting real change within the industry.

  2. Nov 2024
    1. 18.2.1. Aside on “Cancel Culture”

      Cancel culture can be understood through the lenses of social dynamics and digital power structures. One notable aspect is how social media platforms amplify the scale and intensity of public shaming, enabling individuals and groups to mobilize around perceived offenses in nearly real time. Furthermore, the phenomenon underlines tensions between accountability and mob justice, since cancel culture often blurs the line separating punishment from disproportionate revenge. Finally, the long-term psychological and social effects on both the canceled persons and their critics are often ignored, which raises questions about whether such collective actions can create meaningful change or even dialogue.

    2. When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing. For example, Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors, and on StackOverflow “A 2013 study has found that 75% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.” We see the same phenomenon on Twitter:

      A particular imbalance of participation in crowdsourcing systems is the idea of "power users and lurkers": a tiny group of power users creates most of the content, while a much larger and usually passive group of lurkers consumes it. Such dynamics raise questions about representation, bias, and influence within digital spaces. For example, in systems where only a small percentage of users are active creators, the information landscape disproportionately reflects the views and priorities of that vocal few. This is in no way unique to the modern internet; echoes of it exist in how knowledge- and skill-based activities, such as writing or construction, were historically concentrated within specialized groups.

      The same pattern holds today at Wikipedia and Twitter, where a few percent of contributors provide content for millions of users. That is the paradox: community-driven platforms rely on the few, which can produce echo chamber effects or limit the diversity of opinions. Addressing this would require either nurturing a culture of more balanced engagement or finding ways to ensure the needs and views of lurkers are represented in the content created by power users.
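
      The "top 1 percent" skew described above is easy to measure from a post log. Below is a minimal sketch; the `top_contributor_share` helper and the sample log are hypothetical, invented for illustration:

      ```python
      from collections import Counter

      def top_contributor_share(post_authors, top_fraction=0.01):
          """Fraction of all posts made by the most active `top_fraction` of users."""
          counts = Counter(post_authors)                     # posts per user
          per_user = sorted(counts.values(), reverse=True)   # most active users first
          top_n = max(1, int(len(per_user) * top_fraction))  # size of the "power user" group
          return sum(per_user[:top_n]) / len(post_authors)

      # Hypothetical post log: one power user plus ten one-time posters.
      log = ["alice"] * 90 + ["user%d" % i for i in range(10)]
      print(top_contributor_share(log, top_fraction=0.10))  # → 0.9
      ```

      On this toy log, the top 10% of users (a single account) produced 90% of the posts, the same shape of skew the Wikipedia and StackOverflow figures describe.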

    3. 16.3.3. Social and political movements: Some ad hoc crowdsourcing can be part of a social or political movement. For example, social media organizing played a role in the Arab Spring revolutions in the 2010s, and social media platforms were a large part of the #MeToo movement, where victims of sexual abuse/harassment spoke up and stood together.

      It is this power that ad hoc crowdsourcing carries in social and political movements that keeps the role of digital spaces ever evolving, from fostering common cause to giving visibility to voices that might otherwise go unheard. Movements like the Arab Spring and #MeToo prove that, besides being tools for communication, social media can be a path to transformation. These networks allow people to connect across borders, mobilizing around causes that in the past were restricted by geography and by centralized leadership. The real-time nature of social media can help keep momentum alive, with every post or hashtag adding to a cumulative narrative. The emotional pull of such movements often carries an immediacy and urgency that traditional media can seldom match. The challenge now is finding the delicate balance between empowering voices and spreading misinformation, since the openness that allows these movements to be born and grow can also make them vulnerable to manipulation.

    4. One concept that comes up in a lot of different ethical frameworks is moderation. Famously, Confucian thinkers prized moderation as a sound principle for living, or as a virtue, and taught the value of the ‘golden mean’, or finding a balanced, moderate state between extremes. This golden mean idea got picked up by Aristotle—we might even say ripped off by Aristotle—as he framed each virtue as a medial state between two extremes. You could be cowardly at one extreme, or brash and reckless at the other; in the golden middle is courage. You could be miserly and penny-pinching, or you could be a reckless spender, but the aim is to find a healthy balance between those two. Moderation, or being moderate, is something that is valued in many ethical frameworks, not because it comes naturally to us, per se, but because it is an important part of how we form groups and come to trust each other for our shared survival and flourishing.

      Moderation can feel particularly hard to come by in today's world, when social media exaggerates extremes. The platforms reward the most sensational, the most polished, the most dramatic versions of ourselves, pushing us to chase ideals often out of kilter with who we really are. In the never-ending cycle of likes and followers, it is easy to become unmoored from balance and moderation, qualities the ancients prized as the secret to the good life.

      Now, imagine if we approached social media based on a philosophy of the "golden mean." Instead of chasing perfection or sensationalism, we would do our best to build relationships and honor our souls. It could be a place where, instead of only posting our highlights, we shared the real moments too, finding it courageous to be vulnerable rather than hiding behind a curated facade. Moderation on social media would involve resisting the pull to overshare or compare and, instead, deciding to use it mindfully, as a tool to enhance our lives rather than control them. Maybe then social media could be a place where authenticity is celebrated, and together we could find a little more peace in balance.

    5. While taking a break from parts or all of social media can be good for someone’s mental health (e.g., doomscrolling is making them feel more anxious, or they are currently getting harassed online), viewing internet-based social media as inherently toxic and trying to return to an idyllic time from before the Internet is not a realistic or honest view of the matter.

      The "digital detox" movement shows the need for a sense of control and balance in an always-on world. While breaks from social media can help those suffering from doomscrolling fatigue or online harassment, framing the internet as inherently toxic, or longing for some pre-digital "idyllic past," may be an oversimplification. Rather than complete disconnection, perhaps the focus should be on designing healthier online spaces that support users' mental well-being, with built-in tools promoting mindful usage and positive interactions.

      Furthermore, just as colonial narratives of "wilderness" denied the presence of Indigenous peoples, the digital detox trend risks overlooking the marginalized voices that rely on social media for presence, support, and advocacy. This suggests the need for an approach to digital wellness that honors both mental health and the vital connections that digital spaces provide. By incorporating such perspectives, we can create a digital landscape that empowers users without requiring complete disconnection.

    6. 12.1.2. Memes

      Exploring memes within the framework of cultural evolution highlights how ideas propagate similarly to genes, but with a captivating difference: memes are influenced by human intention. In contrast to biological evolution, which relies on random mutations, memes are frequently designed and modified by people. This comparison creates a rich interplay between natural selection and human ingenuity, allowing for smoother adaptation and even 'directed evolution' of concepts. This characteristic empowers memes to adjust almost immediately to social trends and changes in collective awareness.

  3. Oct 2024
    1. 11.3.1. How recommendations can go well or poorly

      These examples reveal the complexity of algorithmic recommendations, which balance between beneficial and hurtful outcomes. Whereas recommendations that link users to new friends or surface relevant ads can increase engagement and satisfaction, recommendations can easily go awry when they surface sensitive content, remind users of traumatic events, or connect them with bad actors. This underscores the need to build context awareness and ethics into algorithms to avoid causing potentially distressing experiences. Responsible recommendation practice probably lies in a good balance between personalization and sensitivity to user well-being.

    2. Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content. This is particularly important for people who make their money from social media content. For example, in the case of the simple “show latest posts” algorithm, the best way to get your content seen is to constantly post and repost your content (though if you annoy users too much, it might backfire).

      Strategies that depend on constantly posting content to increase visibility carry the consequence of a 'quantity over quality' approach, which can be very destructive to the content ecosystem of social media platforms. Creators feel pressure to post frequently in order to stay relevant, which may make them compromise on the depth and authenticity of their content. Besides impacting the mental well-being of creators, this can also lead to audience fatigue, where followers start disengaging because the posts feel repetitive or overwhelming.
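
      The quoted passage's simple "show latest posts" algorithm, and why constant posting games it, can be sketched in a few lines. This is a hypothetical illustration (the `latest_posts_feed` function and the sample accounts are invented), assuming the feed is ranked purely by recency:

      ```python
      from datetime import datetime, timedelta

      def latest_posts_feed(posts, limit=5):
          """Naive 'show latest posts' ranking: newest first, nothing else considered."""
          return sorted(posts, key=lambda p: p["time"], reverse=True)[:limit]

      now = datetime(2024, 11, 1, 12, 0)
      posts = (
          # An account reposting every ten minutes...
          [{"author": "reposter", "time": now - timedelta(minutes=10 * i)} for i in range(4)]
          # ...versus an account that posted once, six hours ago.
          + [{"author": "occasional", "time": now - timedelta(hours=6)}]
      )
      feed = latest_posts_feed(posts, limit=4)
      print([p["author"] for p in feed])  # the frequent poster fills every slot
      ```

      Because recency is the only signal, the reposter occupies all four feed slots and the occasional poster never appears, which is exactly the incentive toward constant posting described above.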

    3. Another way of managing disabilities is assistive technology, which is something that helps a disabled person act as though they were not disabled. In other words, it is something that helps a disabled person become more “normal” (according to whatever a society’s assumptions are). For example:

      This section points out the very complex issues in assistive technology: the tension between empowerment and the pressure to make disabled people "normal." As much as glasses and wheelchairs can offer independence, this reflects an ableist assumption that disabled people should adapt to the able-bodied world. The fixation on "fixing" the person, rather than improving accessibility, can be emotionally exhausting and engender a sense that disability is something not to be. Moreover, assistive technologies are often too expensive, a barrier that keeps them from the very individuals who need them and raises equity concerns. Practices ranging from ABA therapy to conversion therapy show the dangers when interventions prioritize normalization over acceptance, sometimes with severe harm. The time is ripe to turn the conversation toward diversity and toward systems that support all abilities rather than demanding that people with disabilities fit into narrow definitions of functionality. True inclusion is about changing society's attitudes and environments, not changing people.

    4. Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target. Hackers can target individuals with attacks like:

      This really emphasizes that cybersecurity goes beyond just technology; it also involves human behavior. People often reuse passwords for convenience, unaware of how easily that habit can be exploited. It's both fascinating and a bit frightening how trust can be manipulated; take the example of the NSA impersonating Google. Social engineering serves as a perfect reminder that hackers don't always need sophisticated tools; sometimes, they just need to deceive people into trusting the wrong thing. Phishing emails and fake QR codes are particularly clever because they depend on people acting quickly without thinking. The reference to Frank Abagnale from Catch Me If You Can reinforces that point: con artists were exploiting misplaced trust long before the internet.

    5. Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended.

      This dataset poisoning connects with the deepfake issue in Korea, since data manipulation and biased inputs have severe consequences for outcomes. Deepfake applications rely on enormous datasets of images and videos that feed the algorithms generating realistic fake content. If these datasets are contaminated with biased or skewed data, intentionally or unintentionally, such as overrepresentation of certain demographics, the outcomes will be very problematic. Just as viral survey participation undermines data reliability, deepfake datasets poisoned with biased material lead to applications that are unethical or harmful, such as creating non-consensual videos or misinformation. Both examples drive home the risk of unregulated data inputs in critical systems.
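
      The survey-poisoning scenario above can be illustrated with a quick demographic-balance check. This is a minimal sketch with invented numbers, showing how one viral spike from a narrow demographic skews a dataset away from its intended use:

      ```python
      from collections import Counter

      def demographic_shares(responses):
          """Share of survey responses contributed by each demographic group."""
          counts = Counter(r["group"] for r in responses)
          return {group: n / len(responses) for group, n in counts.items()}

      # Hypothetical survey: 100 balanced responses, then a viral video adds
      # 400 respondents from one narrow demographic.
      before = [{"group": g} for g in ["18-24", "25-34", "35-44", "45+"] * 25]
      after_viral = before + [{"group": "18-24"} for _ in range(400)]

      print(demographic_shares(before)["18-24"])       # 0.25 -- balanced
      print(demographic_shares(after_viral)["18-24"])  # 0.85 -- poisoned
      ```

      A check like this is how researchers might detect, after the fact, that the respondent pool no longer resembles the population they meant to study.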

    6. One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many people were leaving bad reviews of Yankee Candles saying “they don’t have any scent” (note: COVID-19 can cause a loss of the ability to smell):

      This is really creative in leveraging unusual data to find a trend. Who would have imagined that COVID-19 cases correlate with negative Yankee Candle reviews, a proxy for not being able to smell, a known symptom? It is an interesting case of how seemingly unrelated streams of data can point to a pattern or predict a trend in public health. It also underlines that any data correlation has to be subject to critical examination so we do not jump to misleading conclusions; in the case of negative reviews, the reasons could be altogether different. This is a good example of how powerful, and how limited, data interpretation can be in a real-life setting.
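
      At bottom, the candle observation is a correlation between two time series. A toy sketch with made-up monthly figures (using `statistics.correlation`, available in Python 3.10+) shows how such a relationship might be quantified; the caveat above about misleading conclusions still applies:

      ```python
      from statistics import correlation  # Pearson's r; Python 3.10+

      # Made-up monthly figures: local COVID-19 cases (thousands) alongside the
      # number of "no scent" candle reviews posted in the same month.
      covid_cases = [5, 20, 60, 90, 150, 120, 80]
      no_scent_reviews = [2, 6, 15, 22, 40, 31, 18]

      r = correlation(covid_cases, no_scent_reviews)
      print(round(r, 2))  # a strong positive correlation -- but not proof of causation
      ```

      A high r here only says the two series move together; it cannot distinguish "sick reviewers lost their smell" from some third factor driving both, which is exactly why correlations like this need critical examination.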

    7. Does anonymity discourage authenticity and encourage inauthentic behavior?

      Anonymity can indeed discourage authenticity and encourage inauthentic behavior. People may start to exhibit behavior they would never show in identifiable settings. This reduced sense of moral judgement might lead them to act unethically and untruthfully. When individuals do not operate under the rules, expectations, or standards of society, their own will becomes the only standard for behavior or speech, with no fear of social or other risk. At the same time, anonymity can also produce the opposite effect: it can provoke honesty in communication with others, because there is no fear of being misjudged. The real concern is in those spaces where the pressure is intense and, consequently, the motives for faking are greater.

    8. Separately, in 2018 during the MeToo movement, one of @Sciencing_Bi’s friends, Dr. BethAnn McLaughlin (a white woman), co-founded the MeTooSTEM non-profit organization, to gather stories of sexual harassment in STEM (Science, Technology, Engineering, Math). Kyle also followed her on Twitter until word later spread of Dr. McLaughlin’s toxic leadership and bullying in the MeTooSTEM organization (Kyle may have unfollowed @Sciencing_Bi at the same time for defending Dr. McLaughlin, but doesn’t remember clearly).

      Authenticity is crucial for maintaining engagement and trust in online communities, and when that trust is broken, it can damage one's social presence. It is especially striking in this case to see a figure who seemed courageous and inspiring fall like this. The effort was intended to help people who were being oppressed, but that purpose could not survive the loss of authenticity.

    9. Designers sometimes talk about trying to make their user interfaces frictionless, meaning the user can use the site without feeling anything slowing them down.

      Yes, that is important when trying to create a program that is "user-friendly." When an interface clearly communicates how it should be interacted with, users can navigate more efficiently, which reduces errors and frustration significantly. Ease of use increases user satisfaction and promotes product success, which is key in today's competitive digital environment.

    10. Can you think of an example of pernicious ignorance in social media interaction? What’s something that we might often prefer to overlook when deciding what is important?

      An example of pernicious ignorance in social media often appears when people share misinformation without recognizing its harmful effects, such as reinforcing stereotypes. Users may overlook the impact of spreading biased content, focusing instead on gaining likes or engagement. Neglecting the ethics of what we amplify can have detrimental effects.

    11. Ethics: Thinking systematically about what makes something morally right or wrong, or using ethical systems to analyze moral concerns in different situations

      This refutes the idea that "machines make more objective decisions than human beings." While machines may process data without human emotions, the reality is that they rely on code and algorithms designed by humans. This means that any biases, assumptions, or limitations present in the programmer's thinking can be embedded in the machine's decision-making process.

    12. Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers.

      This division of responsibility can create ethical dilemmas: a bot can take on new roles or produce results that none of its creators anticipated. This complexity shows how the lines between human intent and machine behavior can blur in an automated environment.

    13. Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them.

      That could be scary in some circumstances, because bots could stand in for human beings at any time. The cybersecurity problem is likely to grow as the distinction between true and false information becomes harder to draw, which may even encourage the spread of false information.