- Last 7 days
social-media-ethics-automation.github.io
if you end up in a position to have an influence in tech, we want you to be able to think through the ethical implications of what you are asked to do and how you choose to respond.
I think that one of the most subtle yet impactful decisions is questioning why certain data is collected: what seems like an innocuous field could lead to privacy risks, misuse, or unintended bias. These small choices compound into larger systemic effects, which is why fostering the ability to think critically and stand firm on ethical principles can shape the future of technology in ways that truly serve humanity.
-
As a social media user, we hope you are informed about things like: how social media works, how they influence your emotions and mental state, how your data gets used or abused, strategies in how people use social media, and how harassment and spam bots operate.
Beyond personal use, social media is a battleground for influence. Brands, political entities, and activist groups use sophisticated strategies like micro-targeting, sending hyper-specific content to just the right audience to shape opinions or behaviour. It's also a psychological playground where everything from colours to notifications is designed to capture your attention. For instance, the infinite scroll feature is modelled on the principles of slot machines, keeping you engaged with the promise of unpredictable rewards.
-
- Nov 2024
-
In colonialism, one group or country subjugates another group, often imposing laws, religion, culture, and languages on that group. In this case, Zuckerberg and Meta are imposing their version of the Internet on people around the world. In particular, when Zuckerberg offers free Internet, it only comes with access to a few sites, such as Wikipedia, and of course Facebook. So Zuckerberg is choosing what part of the Internet people get access to. And while the people might gladly accept this deal, the bargain is being made by two people in very unequal positions, and Zuckerberg has almost complete freedom to set the terms of the deal.
The comparison between Zuckerberg's control over Internet access and colonialism is both fascinating and unsettling because it highlights the immense power imbalance at play. In a sense, Zuckerberg and Meta are exercising a form of digital imperialism, where, rather than the traditional colonial model of military conquest or territorial control, this modern form of domination revolves around data and access.
-
The tech industry is full of colonialist thinking and practices, some more subtle than others. To begin with, much of the tech industry is centralized geographically, specifically in Silicon Valley, San Francisco, California. The leaders and decisions in how tech operates come out of this one wealthy location in a wealthy nation.
In some cases, tech companies extract data, resources, or labor from poorer countries without sharing the full benefits of these activities. For example, the production of tech hardware in factories across the global South, or the collection of personal data from users in countries with limited data protection laws, can be seen as a form of exploitation reminiscent of colonial resource extraction.
-
As many are trying to get women into programming, so that they aren’t cut out of profitable and important fields, Amy Nguyen warns that men might just decide that programming is low status again (as has happened before in many fields): The history of women in the workplace always tells the same story: women enter a male-dominated profession, only to find that it’s no longer a respectable field. Because they’re a part of it, so men leave in droves. Because women do it, and therefore it must not be important. Because society would rather discredit an entire profession than acknowledge that a female-dominated field might be doing something that actually matters.
The idea that when women enter a profession, it becomes devalued is not just unfair; it is deeply rooted in systemic gender biases that have existed for centuries. This dynamic, which Amy Nguyen points out, is a stark reflection of how society often fails to acknowledge the value and legitimacy of women's contributions, particularly in fields that have historically been male-dominated. The suggestion that programming, or any field, becomes less "respectable" simply because women are participating in it speaks to a disturbing pattern of gender-based devaluation.
-
To increase profits, Meta wants to corner the market on social media. This means they want to get the most users possible to use Meta (and only Meta) for social media. Before we discuss their strategy, we need a couple background concepts:
The concept of the network effect is fascinating to me because it creates a powerful feedback loop that can make or break a platform. For companies like Meta, this effect is both their greatest advantage and a high-stakes challenge. The more people who join Meta's platforms, the more valuable and engaging those platforms become due to increased content, interactions, and opportunities for social connections.
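One common way to make the network effect concrete is Metcalfe's law, which counts the possible pairwise connections among users. The Python sketch below is purely illustrative (the user counts are made up), but it shows why growth compounds: doubling the users roughly quadruples the possible connections.

```python
# Possible pairwise connections among n users: n * (n - 1) / 2.
# This is the usual back-of-the-envelope illustration of the network
# effect (Metcalfe's law); real platform value is harder to measure.
def possible_connections(n):
    return n * (n - 1) // 2

for n in [10, 100, 1000]:
    print(n, "users ->", possible_connections(n), "possible connections")
```

Each new user makes the platform a little more valuable for every existing user, which is the feedback loop described above.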
-
When we think about repair and reconciliation, many of us might wonder where there are limits. Are there wounds too big to be repaired? Are there evils too great to be forgiven? Is anyone ever totally beyond the pale of possible reconciliation? Is there a point of no return?
What I find fascinating is how these questions push us to reflect on human nature and our capacity for growth, justice, and mercy. Is reconciliation about the person who caused harm, or is it about the person who was harmed finding peace? The idea of a "point of no return" might exist for some, but others might see every wound as holding at least a small possibility for repair, even if it takes generations to heal. This tension between the impossibility and the hope of reconciliation is what makes it such a profound and challenging process.
-
under what conditions public shaming would be morally permissible
My view is that public shaming may be morally permissible when it serves as a mode of accountability not available by any other means, particularly when the harm is caused by an individual or entity that holds great power or influence. For example, public shaming can be justified when it exposes systemic abuses, such as corporate exploitation or political corruption, especially in contexts where traditional channels of accountability, like courts or regulatory bodies, are unreachable, ineffective, or compromised.
-
Gamergate was a harassment campaign in 2014-2015 that targeted non-men in gaming: Zoë Quinn, Brianna Wu, and Anita Sarkeesian. The harassment was justified by various false claims (e.g., journalistic malpractice), but mostly motivated by either outright misogyny or feeling threatened by critiques of games/gaming culture from a not straight-white-male viewpoint
I found it interesting how the campaign also exposed systemic issues in how platforms handle harassment, demonstrating the inadequacy of reactive moderation systems in the face of collective and sustained online abuse. Gamergate wasn't just about gaming; it was a cultural flashpoint that highlighted the broader struggles over representation, equity, and the evolution of digital communities.
-
Harassment in social media contexts can be difficult to define, especially when the harassment pattern is created by a collective of seemingly unconnected people. Maybe each individual action can be read as unpleasant but technically okay. But taken together, all the instances of the pattern lead up to a level of harm done to the victim which can do real damage.
Sometimes, when harassment emerges from a collective of seemingly unconnected people, individuals feel less accountable for their actions because they perceive themselves as just one small part of a larger group. This makes it difficult to address the harm effectively through individual-focused frameworks like reporting or moderation systems that target single users.
-
When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing.
Lurkers are often critical to the sustainability of crowdsourcing systems. Although they may not contribute directly, they still play an important role by observing and absorbing information. This is especially true in knowledge-sharing platforms like Wikipedia, where lurkers may read and learn from contributions before deciding to engage themselves.
-
Social media crowdsourcing can also be used for harassment, which we’ll look at more in the next couple chapters. But for some examples: the case of Justine Sacco involved crowdsourcing to identify and track her flight, and even get a photo of her turning on her phone.
Crowdsourcing on social media can be a powerful tool but it also has a dark side, especially when used for harassment or public shaming. Regarding the Justine Sacco case, once Sacco landed and turned her phone on, she was bombarded with messages and notifications. Social media users were waiting for her arrival, and some even took photos of her at the airport as she walked off the plane. This moment, once she was fully exposed to the scale of the outrage, became part of the public spectacle, amplifying the shaming.
-
They have to get through many posts during their time, and given the nature of the content (e.g., hateful content, child porn, videos of murder, etc.), this can be traumatizing for the moderators:
Moderators often face moral dilemmas when deciding whether to remove or allow content. For instance, they may struggle with the idea of preserving free speech while also ensuring the platform remains a safe space for users. Deciding what constitutes harmful content, versus something that might be considered offensive but is still within the boundaries of free expression, is a complex task that requires careful judgment.
-
Letting individuals moderate their own spaces is expecting individuals to put in their own time and labor. You can do the same thing with larger groups and have volunteers moderate them. Reddit does something similar where subreddits are moderated by volunteers, and Wikipedia moderators (and editors) are also volunteers.
When individuals or volunteer moderators are left to set and enforce their own rules, it can lead to inconsistent moderation practices. What one moderator finds acceptable, another might find unacceptable, leading to confusion and frustration for users. Volunteers may also have personal biases, which can result in uneven or unfair treatment of members, especially if certain groups or opinions are favoured over others. This can create an environment of inequality or discrimination.
-
Sites like 4chan and 8chan bill themselves as sites that support free-speech, in the sense that they don’t ban trolling and hateful speech
On one hand, these platforms give a voice to alternative and often marginalised perspectives, creating subcultures that might otherwise be silenced or ignored. On the other hand, they have also served as breeding grounds for harmful content, wherein some sections of the same communities turn towards extremism or actions that are harmful in the real world.
-
Social media platforms themselves have their own options for how they can moderate comments, such as:
I find it interesting that the choices platforms make in designing these moderation tools impact not only the tone of the discussions but also the kind of communities that will thrive. For example, on platforms that grant more moderation powers to individual users, discussions can be more civil because users filter out harassment or negativity. With less moderation, open and unfiltered speech might permit many comments that are damaging or misleading.
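As a rough sketch of one such user-side option, here is a toy word-blocklist filter of the kind a platform might let users configure. The function name, blocklist, and comments are all invented for illustration; real platforms use far more sophisticated matching.

```python
# A toy comment filter: hide any comment containing a blocklisted word.
# The blocklist here is a placeholder, not from any real platform.
BLOCKLIST = {"spamword", "scamword"}

def visible_comments(comments, blocklist=BLOCKLIST):
    """Return only the comments with no blocklisted word in them."""
    return [c for c in comments
            if not any(word in c.lower().split() for word in blocklist)]

print(visible_comments(["nice post!", "buy now spamword"]))
```

Even this tiny sketch shows the trade-off discussed above: whoever chooses the blocklist decides what speech stays visible.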
-
Doomscrolling is:
I have never actually heard of this term before, but the concept of doomscrolling can have a huge impact on one's mental health. It can feel deceptively productive, as if we’re staying informed or preparing ourselves by absorbing more information. In reality, though, it often leads to feelings of helplessness and exhaustion, as there’s only so much one person can process.
-
For example, Facebook has a suicide detection algorithm, where they try to intervene if they think a user is suicidal
Although this is a prime example of how artificial intelligence can be used to support mental health, critics argue that users might not realise their posts are being analysed for mental health signals, and some question the transparency of Facebook’s data collection practices. Additionally, concerns exist about false positives or unintended consequences of intervention, such as emergency services showing up unexpectedly at a user’s home based on algorithmic analysis.
-
When is it ok to not cite sources for content?
In fiction, poetry, or a song, the artists seldom provide a long list of all of those from whom they drew inspiration. Art can evoke emotions and ideas without always being upfront or clear about where the idea originally comes from. The line between influence and plagiarism is often thin. Although many creative works organically emerge from preexisting ideas or styles, disputes arise when the lines of "inspiration" and "originality" are not well demarcated.
-
When someone creates content that goes viral, they didn’t necessarily intend it to go viral, or viral in the way that it does.
I find it interesting how heartfelt or sincere content sometimes gets misconstrued as satire or repurposed as memes. A creator might have shared a very personal story or statement to inspire or educate, but it ends up becoming humorous content for others, or is sometimes parodied. This misinterpretation either helps the creators by increasing their reach or, on other occasions, dilutes their message or even invites negative attention.
-
Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content
One fascinating thing is the "engagement bait" strategy, where users create content for the specific purpose of eliciting a reaction. For example, posts that beg people to "like," "share," or "comment" on their post are legion, since all of these actions signal to the algorithm that this is engaging content that deserves more visibility. Similarly, some users make a conscious decision to use trending keywords, hashtags, or even to adopt a content style similar to other popular posts as a way to manufacture viral characteristics rather than create authentic or original content.
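A toy sketch can show why those actions matter to creators: if a recommendation algorithm ranks posts by a weighted engagement score, engagement bait can outrank sincere content. The weights and posts below are entirely invented; real platforms keep their ranking formulas secret.

```python
# A toy engagement score of the kind a ranking algorithm might use.
# The weights are made up for illustration only.
def engagement_score(post):
    return (1 * post["likes"]
            + 3 * post["comments"]
            + 5 * post["shares"])

posts = [
    {"id": "sincere", "likes": 120, "comments": 4,  "shares": 2},
    {"id": "bait",    "likes": 60,  "comments": 50, "shares": 20},
]

# Sort highest score first: the bait post wins despite half the likes,
# because comments and shares are weighted more heavily.
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])
```

Begging for comments and shares is rational under such a scoring scheme, which is exactly the incentive described above.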
-
recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes
I find it interesting that one of the more subtle issues with recommendation algorithms is that they create "filter bubbles" through the constant suggestion of similar content, reinforcing the views a user already holds. This decreases the likelihood of exposure to diverse ideas and further polarizes individuals by showing them only what agrees with their beliefs. As this process repeats over time, it feeds the echo chamber effect, in which misinformation or one-sided perspectives are amplified, driving public opinion and social behavior in ways no one intended.
-
Additionally, people with disabilities might change their behavior (whether intentionally or not) to hide the fact that they have a disability, which is called masking and
Masking can also be an act of agency, giving people a sense of control about how they are perceived and enabling them to determine if and when this aspect of identity is revealed. On a social level, masking can be one way to avoid intrusive questions or unwanted sympathy, allowing them to be seen and valued for their personality and their contributions rather than through the lens of a diagnosis.
-
but it is worth reiterating again here that design justice includes considering which groups get to be part of the design process itself.
I think it is essential that inclusivity is fostered not just in the outcomes of the design, but in the design process itself. It is vital that we consider whose voices are centred and whose lived experiences shape the design. When we design with, and not just for, users, and actively include marginalized and impacted communities, we build systems and products that reflect diverse needs and values.
-
- Oct 2024
-
This includes the creation of Shadow Profiles, which are information about the user that the user didn’t provide or consent to
I think that the concept of 'shadow profiles' presents a significant challenge to privacy regulation and to monitoring whether data on a platform is kept as secure as possible. It can also be seen as a form of deception: a user may decline a service precisely because they don't want to be part of the company's data collection practices, yet their data is still gathered by the system, which can be deemed unethical.
-
Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie ratings data sets, but at least some users’ data could be traced back to them.
I have come across multiple research papers on de-identification of personal information, and what is interesting as well as a bit concerning is the finding that most individuals can be identified from only three pieces of information: zip code, birthday, and gender. This discovery led to changes in policies regarding privacy research and regulation; however, it leads me to think that as privacy protections evolve, so will the methods to breach them.
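A toy illustration of how such re-identification works, with entirely invented data: the "anonymized" rows have no names, but they still carry quasi-identifiers (zip, birthday, gender) that can be joined against a public record.

```python
# Toy re-identification sketch; every row below is made up.
anonymized = [
    {"zip": "98105", "birthday": "1990-04-12", "gender": "F",
     "rating": "disliked Movie X"},
    {"zip": "60601", "birthday": "1985-11-03", "gender": "M",
     "rating": "liked Movie Y"},
]
public_records = [
    {"name": "Jane Roe", "zip": "98105", "birthday": "1990-04-12",
     "gender": "F"},
]

# Join the two datasets on the quasi-identifiers.
matches = []
for row in anonymized:
    for person in public_records:
        if all(row[k] == person[k] for k in ("zip", "birthday", "gender")):
            matches.append((person["name"], row["rating"]))

# The "anonymous" movie rating is now tied to a real name.
print(matches)
```

Removing names alone was not enough: the combination of three ordinary-looking fields did the identifying, which is essentially what happened with the Netflix data.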
-
Sometimes a dataset has so many problems that it is effectively poisoned or not feasible to work with.
I think data poisoning can have severe, often overlooked consequences: misleading information and drastically reduced accuracy can lead companies and brands, especially on social media, to make major decisions that end in losses and failures.
-
Data mining is the process of taking a set of data and trying to learn new things from it.
I find it fascinating that data mining of social media data can be used to gauge public opinion by extracting the emotions, ideas, and motives behind posts. Brands can apply this when they want to understand their audience's interests.
-
What are the potential benefits of this example (e.g., it’s funny, in-group identifying)? And who would get the benefits?
Trolling can pose some benefits, for example when it is used against users who promote hate speech or cyberbullying online. Not only does it stand up against those who misuse the internet with malicious intent, it can also be funny, fostering community and camaraderie.
-
In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from.
What I find interesting is the evolution of trolling on the internet from a playful, mischievous activity to something malicious and harmful in many ways. This has to do with the change in the intentions of the people who troll. Back in the day, I believe the intention was to amuse themselves or confuse people on the internet rather than to harm or harass.
-
Parasocial relationships are when a viewer or follower of a public figure (that is, a celebrity) feel like they know the public figure, and may even feel a sort of friendship with them, but the public figure doesn’t know the viewer at all.
What I find intriguing is the impact of parasocial relationships on viewers. Viewers often develop a strong connection with the influencer, which frequently grows deep enough to influence their values, beliefs, and sometimes their actions. One may argue parasocial relationships are inauthentic because they are an illusion, resulting in a one-sided relationship. There is no genuine connection or vulnerability, since influencers often display a curated persona to the media.
-
We value authenticity because it has a deep connection to the way humans use social connections to manage our vulnerability and to protect ourselves from things that threaten us
I think this is a really interesting point as to why authenticity is highly valued. Regarding authenticity as an aspect of individual personality, we also value the sense of integrity which comes with it. Especially when one's actions align with their values and beliefs, they are respected for their authenticity.
-
In the first decade of the 2000s the way websites worked on the Internet went through a transition to what is called “Web 2.0.”
An interesting fact I came across is that early Web 2.0 platforms used chronological feeds, where posts appeared in users' feeds in order of recency, whereas social media today uses algorithmic feeds based on the user's interests, activity, and interactions.
-
there was a parallel growth of social media platforms that were based on having “no rules”,
This is the first time I'm hearing of such platforms, and it defies the idea that social media always reinforces connectivity and is mostly beneficial in some way or another. It also raises the question of why platforms like this continue to exist despite creating high levels of toxicity, misinformation, and manipulation; the usual reasons, such as the revenue the platforms earn and appeals to free speech, are ones I personally find extremely unethical.
-
The Dictionary data type allows programmers to combine several pieces of data by naming each piece.
I am not very familiar with this data type but I find it fascinating how it can store a variety of different data types and the efficiency it offers when it comes to looking up pieces of information from a big list.
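A minimal Python sketch of the idea (the field names and values are made up for illustration): a dictionary names each piece of data, can mix value types, and supports fast lookup by name.

```python
# A dictionary names each piece of data; values can be of mixed types.
user = {
    "username": "example_user",   # string
    "followers": 1523,            # integer
    "is_verified": False,         # boolean
}

# Looking up a value by its name is fast even in a very large dictionary,
# because Python dictionaries are implemented as hash tables.
print(user["followers"])  # prints 1523
```

This is why dictionaries suit social media data: a post or user profile is naturally a bundle of named fields of different types.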
-
Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata)
I think that the importance of metadata and the contextual power it holds is not often recognised. It adds another layer of depth to a post by including background information about the post, and a sense of ownership is also recorded as part of the metadata. From a different perspective, however, it can be deemed controversial: it is quite intrusive, exposing user locations, movements, behavioural insights, and timestamps that many users may not approve of.
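A small, hypothetical Python sketch of a post and its metadata; none of these field names come from a real platform's API, they just illustrate the data/metadata split described above.

```python
# The main piece of data (a post's text) bundled with metadata about it.
# All field names and values here are invented for illustration.
post = {
    "text": "Just landed, excited for the trip!",  # the main data
    "metadata": {
        "author": "example_user",
        "timestamp": "2024-10-15T08:30:00Z",
        "location": (-33.92, 18.42),   # latitude, longitude
        "device": "phone",
    },
}

# The metadata alone reveals where and when the user was,
# without reading the text at all.
print(post["metadata"]["location"])
```

Notice that the privacy concern lives entirely in the metadata: the location and timestamp say a lot even if the text says nothing.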
-
Why do you think social media platforms allow bots to operate?
Bots have proven to be useful in social media platforms in many different ways. One is that they enable routine tasks to be automated. For example, there will always be a need for queries to be answered, and bots make this much easier: humans are not required to take the time to read, understand, and come up with a suitable response to a huge number of questions or inquiries.
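A minimal sketch of such an automated responder, assuming a simple keyword-matching approach. No real platform API is used here (in practice a bot would receive messages through a library like tweepy or the atproto SDK), and the FAQ entries are invented.

```python
# A toy FAQ auto-responder: match a message against known keywords.
# The keywords and canned answers below are purely illustrative.
FAQ = {
    "password": "You can reset your password from the account settings page.",
    "refund": "Refunds are processed within 5-7 business days.",
}

def auto_reply(message):
    """Return a canned answer if a known keyword appears, else None."""
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return None  # no match: hand off to a human

print(auto_reply("How do I get a refund?"))
```

Even this tiny bot shows the appeal: one function can answer an unbounded number of routine questions without a human reading each one.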
-
We also would like to point out that there are fake bots as well, that is real people pretending their work is the result of a Bot.
I didn't really understand the purpose of fake bots initially, but I came to the realisation that in some ways they are a form of deception and manipulation: they may be used to bring certain things to attention while maintaining a sense of anonymity, so the human behind them does not have to take any accountability. I found this quite interesting.
-
Act with unforced actions in harmony with the natural cycles of the universe. Trying to force something to happen will likely backfire. Rejects Confucian focus on ceremonies/rituals. Prefers spontaneity and play.
I am a strong believer in individualism, in the sense that fulfilling a specific purpose or prioritising individual desires sometimes requires going against the natural flow and harmony of a situation. In such cases, forcing something to happen is the only way to achieve an objective or a greater impact. Furthermore, we are bound to face resistance and struggles in life, which lead to personal growth and achievement. In these scenarios I believe that Taoism does not fully account for overcoming challenges while staying true to yourself.
-
Why did so many people see it? How did it spread?
What I find interesting is that things go viral not only because of their emotional appeal, relatability, or the controversy and debate they spark as a social or cultural trigger, but also because of 'social currency'. Often, things that go viral on a platform like Twitter either add to or take away from one's social status in an extreme way. Given the time period, Justine's tweet touched on highly charged and sensitive issues related to race and privilege in society. It sparked controversy, destroyed her social standing, and cost her her job.
-