29 Matching Annotations
  1. Nov 2023
    1. Tech industry leaders in Silicon Valley then take what they made with exploited labor, and sell it around the world, feeling good about themselves, believing they are benefitting the world with their “superior” products.

      This is interesting when one considers the planned obsolescence built into many tech products, which promotes waste and consumerism: devices unnecessarily break down over time, in tandem with marketing that presents the latest version of a product as a necessity, ensuring consumers are in constant need of new technology. Factors such as social media and digitization further the integral role these devices play in the everyday lives of many, their wastefulness contradicting the image of reusability and sustainability often associated with digitized versions of analog products (e.g., digital texts rather than paper ones, digital music, finances, etc.).

    1. In a publicly funded organization, non-profit organization, or crowd-funded project (e.g., Wikipedia, NPR, Kickstarter projects, Patreon creators, charities), the investors (or donors) are not investing in profits from the organization, but instead are investing in the product or work the organization does.

      It is interesting to see how these forms of crowd-funding online are generally reserved for information databases and other educational sites. I am curious if such a system could be implemented into major social media sites such as Facebook/Twitter, and if such is even possible under the current economic system.

    1. We could also consider this, in part, a large-scale public shaming of apartheid and those who hurt others through it. Unlike the Nuremberg Trials, the Truth and Reconciliation Commission gave a path for forgiveness and amnesty to the perpetrators of violence who provided their testimony.

      This contrast in group response to atrocity poses interesting questions regarding how the dominant ethical framework of a culture or society influences broader societal perspectives regarding punishment vs. rehabilitation. Within the context of social media shaming, it is arguable that individualistic beliefs regarding the nature of harm are the basis for the general culture surrounding these issues, causing individuals to turn towards specific instances rather than addressing broader patterns collectively.

    1. Another way of considering public shaming is as schadenfreude, meaning the enjoyment obtained from the troubles of others.

      This is an interesting facet of the issue of public shaming on the internet as it relates to other instances of schadenfreude proliferating in online spaces, such as the concept of cringe culture or other social media behaviors built off of negativity and group ostracizing. I am curious as to what extent these behaviors are catalyzed by the nature of social media or exist simply as reflections of innately human behavior that proliferate regardless.

    1. After a company starts working on moderation, they might decide to invest in teams specifically dedicated to content moderation. These teams of content moderators could be considered human computers hired to evaluate examples against the content moderation policy of the platform they are working for.

      Although the existence of these positions is important in preventing harmful content from proliferating on a given platform, it raises ethical questions regarding the psychological impact this type of job may have on those who perform it. Repeated exposure to and examination of content deemed harmful may over time become traumatizing to those who moderate it, requiring a heightened tolerance for the content in order to perform the job successfully.

    1. With copyrighted content, the platform YouTube is very aggressive in allowing movie studios to get videos taken down, so many content creators on YouTube have had their videos taken down erroneously.

      This is interesting as it speaks to the role of advertising on social media at its current scale and the financial power of corporations over the distribution of their content on these platforms. Large studios are therefore far more likely to have their reposted content removed quickly than a smaller user with no corporate backing or affiliation.

    1. The incel worldview is catastrophizing. It’s an anxious death spiral. And the solution to that has to be therapeutic, not logical.

      To what extent do online platforms incentivize the proliferation of negative echo-chambers such as these? Due to the nature of social media algorithms, if one expresses interest in content pertaining to negative experiences or poor self image, particularly in relation to an aspect of their identity, it is easier to fall into negative behaviors of trying to validate these thoughts/feelings online. Although platforms may remove certain subcultures they find harmful online, to what extent does the nature of social media make this behavior inevitable?

    1. But Lauren Collee argues that by placing the blame on the use of technology itself and making not using technology (a digital detox) the solution, we lose our ability to deal with the nuances of how we use technology and how it is designed:

      This is interesting as it speaks to the ambiguous nature of popular belief regarding 'wellness' and similar indicators of health and wellbeing. Although digital detoxes and similar efforts to decrease one's time spent online are generally beneficial to one's health, their aimless nature renders them unable to impact one's larger dependence on devices.

    1. Different designs of social media platforms will have different consequences in what content goes viral, just like how different physical environments determine which forms of life thrive and how they adapt and fill ecological niches.

      This is an interesting framework through which to analyze the phenomenon of internet virality as it speaks to the subcultural or in-group facet of meme/viral content exchanges. It is particularly interesting to see how these different perspectives add nuance to the initial posts themselves, which may for instance be viewed as ironic in certain circles but shared in earnest elsewhere. It also speaks to the requirement of pre-conceived knowledge or the accumulation of such as memes circulate within certain communities, possessing layers only understood by the given in-group in which the content originates.

    1. Dawkins then took this idea of the evolution of information and applied it to culture, coining the term “meme” (intended to sound like “gene”).

      This use of terminology is interesting as it defines a 'meme' by its broader social evolution and interaction with the public at large. The evolution of a particular piece of media or concept in this way is thus determined by group consensus, uniquely accessible through the wide reach of social media and the internet at large.

    1. One concern with recommendation algorithms is that they can create filter bubbles (or “epistemic bubbles” or “echo chambers”), where people get filtered into groups and the recommendation algorithm only gives people content that reinforces and doesn’t challenge their interests or beliefs. These echo chambers allow people in the groups to freely have conversations among themselves without external challenge.

      Although this aspect of social media algorithms fosters community in a way that benefits users, the equally effective ability of this technology to proliferate hate groups and harmful rhetoric renders it ethically dubious. I am curious as to what extent algorithms have become intrinsic to social media itself: although social media existed before algorithmically recommended content, I wonder whether, if such algorithms were deemed too socially dangerous to continue, social media could exist without them given the way it currently functions.
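      The reinforcement dynamic the quoted passage describes can be illustrated with a toy sketch (not any real platform's algorithm, and the topic labels are hypothetical): ranking candidate posts by similarity to past engagement, which by construction surfaces more of what the user already consumes.

```python
from collections import Counter

def recommend(history_topics, candidate_posts, k=2):
    """Toy recommender: rank candidates by how often their topic
    appears in the user's engagement history, then keep the top k.
    Topics the user never engaged with score zero and sink to the bottom,
    which is the filter-bubble effect in miniature."""
    counts = Counter(history_topics)
    ranked = sorted(candidate_posts, key=lambda p: counts[p["topic"]], reverse=True)
    return ranked[:k]
```

      For example, a user whose history is `["cats", "cats", "dogs"]` will be shown cat and dog posts first, while an unseen topic like news is pushed out of the top results entirely.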

    1. They don’t want malicious users to see the algorithm and figure out how to best make their content go viral

      This is an interesting reason for algorithm secrecy within social media sites given the phenomenon of political pipelines being algorithmically supported. I recall hearing of far-right agendas and content proliferating this way, and I am curious as to what extent this is due to the nature of algorithms generally, or whether social media companies are to blame for fostering these echo chambers.

  2. Oct 2023
    1. In this way of managing disabilities, the burden is put on the computer programmers and designers to detect and adapt to the disabled person.

      This is interesting as different platforms may be innately more accessible by default, so the extent to which extra means of accessibility would have to be implemented for disabled users may alter between platforms. Different disabilities may require different alterations as well, which would have to be taken into consideration by programmers and designers when creating adaptations.

    1. Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another.

      In regards to social media and other digital means of communication, I am curious as to what extent the cultural context in which they were created has impacted their accessibility. The degree to which disability, and the multitude of ways it may hinder one's ability to use a given site or platform, is considered in a platform's creation may fluctuate depending on how prioritized these issues are.

    1. One of the things you can do as an individual to better protect yourself against hacking is to enable 2-factor authentication on your accounts.

      Although this is an effective means of security right now, I am curious as to how it may be surpassed by hackers as technology progresses. Will these means of security remain effective if an attacker is able to compromise multiple accounts or devices at once?
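      The "second factor" in many 2-factor authentication apps is a time-based one-time password (TOTP). A minimal sketch of how such a code is derived from a shared secret and the current time, following RFC 6238 (the secret in the usage note is the RFC's published test key, not a real credential):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Time-based one-time password (a sketch of RFC 6238)."""
    key = base64.b32decode(secret_b32)
    # Count how many 30-second steps have elapsed since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)
```

      Because both the server and the phone app derive the same code from the shared secret and the clock, the code proves possession of the enrolled device; e.g. `totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59)` yields `"287082"`, matching the RFC 6238 test vector.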

    1. We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad)

      This raises questions regarding which uses of data (pertaining to one's words or actions) are passable in the eyes of the law. In terms of privacy, there is a fine line between when surveillance is necessary for the safety of users and when such impedes upon one's right to privacy.

    1. People in the antiwork subreddit found the website where Kellogg’s posted their job listing to replace the workers. So those Redditors suggested they spam the site with fake applications, poisoning the job application data, so Kellogg’s wouldn’t be able to figure out which applications were legitimate or not (we could consider this a form of trolling). Then Kellogg’s wouldn’t be able to replace the striking workers, and they would have to agree to better working conditions.

      It is interesting to see these technologies utilized to promote societal change by those who do not generally control the means through which they are implemented towards the public. I am curious as to how these dynamics may influence the development of data programming moving forward, and if major socio-economic shifts would have to occur beyond the world of social media/tech in order for actions such as this to have tangible impact.

    1. Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence.

      I've noticed a lot of these ideas having a resurgence in popular culture on social media, particularly among younger/adolescent demographics. Although some express these behaviors with full knowledge of the implications and with ill intent, it is interesting and concerning how many who would not knowingly agree with the political implications still lean into these visual judgments, and I am curious as to what extent algorithms contribute to the proliferation of these behaviors.

    1. Every “we” implies a not-“we”. A group is constituted in part by who it excludes. Think back to the origin of humans caring about authenticity: if being able to trust each other is so important, then we need to know WHICH people are supposed to be entangled in those bonds of mutual trust with us, and which are not from our own crew.

      This idea of 'trolling' as a signifier of an in-group identity raises questions regarding the ethics of the action as it relates to socio-economic status. If a privileged group in society behaves in this way, it is arguably far more reprehensible than if an oppressed group behaved similarly as a means of protest, due to the innate power one group may hold.

    1. These trolling communities eventually started compiling half-joking sets of “Rules of the Internet” that outlined their trolling philosophy:

      Although similar behaviors centered on the torment of others are seen throughout human history, I am curious as to what extent social media and the internet foster these inclinations by creating established spaces in which they are encouraged. It is also possible that these spaces exist tangentially as an oppositional response to social and political progress that generally condemns these unfiltered displays of one's malicious intent. By philosophizing these behaviors, users that openly engage with hateful rhetoric create a sense of community surrounding these ideas that validates their sense of animosity in a manner that is arguably unique to the internet age.

    1. Anonymity can also encourage authentic behavior. If there are aspects of yourself that you don’t feel free to share in your normal life (thus making your normal life inauthentic), then anonymity might help you share them without facing negative consequences from people you know.

      As social media becomes increasingly integral to the social lives and careers of the general public, it is arguable that this anonymous form of social media has become less popular amongst the average user. I am curious as to what extent this reflects a blurring line between the individual and their social media persona.

    1. She also highlights how various “calls to action” (e.g., “subscribe to my channel”) may be necessary for business and can be (and appear) authentic or inauthentic.

      This is an interesting sentiment as it is arguably intrinsic to the profitability social media has come to possess, especially within the last decade. The motivation to market oneself in order to profit from a social media presence appears unavoidable if one is to progress in this particular niche, which raises ethical concerns regarding what one chooses to promote and whether one can be held liable for it within the context of social media advertising.

    1. But one 4Chan user found 4chan to be too authoritarian and restrictive and set out to create a new “free-speech-friendly” image-sharing bulletin board, which he called 8chan.

      The proliferation of social media sites campaigning for an interpretation of free speech that centers on the allowance of unfiltered extremist views and hate-mongering generally coincides with social and political shifts reflecting similar notions among certain demographics of the public. The rise of far-right political ideologies in popular culture may, to a certain extent, be credited to their spread through sites such as this.

    1. In the mid-1990s, some internet users started manually adding regular updates to the top of their personal websites (leaving the old posts below), using their sites as an online diary, or a (web) log of their thoughts.

      I am curious as to how this entry-oriented development in social media has impacted the relation of users to personal information and the notion of an audience through the internet. Since this implementation, how has the line between internalized thoughts/ideas/opinions and externalized ones been blurred?

    1. When we think about how data is used online, the idea of a utility calculus can help remind us to check whether we’ve really got enough data about how all parties might be impacted by some actions

      I find the notion of intentionality and awareness when interacting with social media platforms interesting, as in theory it would heighten the social responsibility of platform users. Given the personalized nature of social media algorithms and the economic incentives of tech companies, however, I wonder whether this ethical framework could truly take hold in practice. I am also curious as to what extent the notion of 'pernicious ignorance' in the context of social media is shaped by the social and political climate of the world surrounding it.

    1. for a country name (string), have a pre-set list of valid country names

      I am interested in the way this constraint may possess innate biases regarding geographical disputes. It reminds me of the way digital maps provided by Google and comparable corporations can present politicized visual depictions of the world, particularly during times of conflict.
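      The pre-set-list constraint from the quoted passage can be sketched as follows; the allow-list here is a tiny hypothetical subset for illustration, not a real or complete dataset (which is exactly where the bias concern arises: whoever compiles the list decides which names are "valid").

```python
# Hypothetical allow-list; a real one would come from a maintained dataset.
VALID_COUNTRIES = {"Canada", "Japan", "Kenya", "New Zealand"}

def validate_country(raw):
    """Return the cleaned country name, or raise ValueError
    if it is not on the pre-set list."""
    name = raw.strip()
    if name not in VALID_COUNTRIES:
        raise ValueError(f"Unrecognized country: {name!r}")
    return name
```

      For example, `validate_country("  Japan ")` returns `"Japan"`, while a name missing from the list is rejected outright, regardless of whether anyone considers it a country.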

    1. Pseudocode is intended to be easier to read and write. Pseudocode is often used by programmers to plan how they want their programs to work, and once the programmer is somewhat confident in their pseudocode, they will then try to write it in actual programming language code.

      Given its reflection of human language, to what extent might pseudocode's ease of understanding fluctuate across written languages? I am curious whether the linguistic context of its development influenced its structure, or whether it is tailored more toward the conventions of programming languages themselves.
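      The plan-first workflow the quoted passage describes can be illustrated with a small hypothetical feed-filtering task: the pseudocode plan appears as comments, followed by its translation into Python.

```python
# Pseudocode plan (written first, in plain language):
#   for each post in the feed:
#       if the post is flagged as spam, skip it
#       otherwise, add it to the list of posts to show
#
# The same plan translated into actual code:
def filter_feed(posts):
    visible = []
    for post in posts:
        if post.get("is_spam"):
            continue  # skip flagged posts
        visible.append(post)
    return visible
```

      The pseudocode reads like English sentences, while the translation must satisfy Python's exact syntax, which is one way to see why the plain-language step is considered the easier one to read and write.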

    1. Antagonistic bots can also be used as a form of political pushback that may be ethically justifiable.

      The allowance/presence of bots on social media platforms presents a challenging moral predicament that may greatly fluctuate depending on one's general ethical framework or political perspective. Since social media companies profit from the rapidly increased interaction bots provide, there is little incentive for companies to undermine their proliferation regardless of intention. Given the generally apolitical nature of a bot's rudimentary structure, deciding how to approach its application to pertinent social and political issues reveals a blurred line between the seemingly neutral nature of the bots themselves and the politically charged actions of those who may utilize them.

    1. Care Ethics began by contrasting the American socially male way of considering ethics, especially valued behaviors in business and government contexts, vs. the American socially female way of considering ethics in relationships, especially in the female-coded spaces of the family and the home.

      I believe that the focus on gendered socialization the Care Ethics framework presents is useful when analyzing the impact of gendered thinking on societal understandings of morality, especially when such understandings historically center the perspective of men. I disagree with certain elements of this framework, however, pertaining to the notion of 'female-coded' spaces and the proposition that there is an innate difference in ideological perspective on the basis of gender, which implies that women are more inclined toward familial relations than broader societal interactions.