-
social-media-ethics-automation.github.io
-
One concern with how recommendation algorithms work is that they can create filter bubbles (or “epistemic bubbles” or “echo chambers”), where people get filtered into groups and the recommendation algorithm only gives people content that reinforces and doesn’t challenge their interests or beliefs. These echo chambers allow people in the groups to freely have conversations among themselves without external challenge.
However, the passage also recognizes that not all filter bubbles are undesirable. For instance, fan communities or marginalized groups may benefit from spaces where they feel protected or validated. Filter bubbles can serve as safe spaces for oppressed communities, providing a sense of belonging and solidarity, free from the harassment or antagonism they might face elsewhere. Overall, the paragraph highlights that filter bubbles are a double-edged sword: they can reinforce harmful views, but they can also foster groups that support people with similar vulnerabilities or interests.
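To make the mechanism concrete, here is a minimal sketch (my own illustration, not from the textbook, with invented items and topics) of how an engagement-driven recommender can form a bubble: it only surfaces items from topics the user has already clicked on, so the feed narrows over time.

```python
from collections import Counter

# Invented catalog: each item is tagged with a single topic.
CATALOG = [
    ("article_1", "politics_left"), ("article_2", "politics_right"),
    ("article_3", "politics_left"), ("article_4", "cooking"),
    ("article_5", "politics_left"), ("article_6", "cooking"),
]

def recommend(click_history, n=3):
    """Recommend the n catalog items whose topics the user clicked most.

    Items from topics the user never clicked are never shown, which is
    exactly the feedback loop that produces a filter bubble.
    """
    topic_counts = Counter(topic for _, topic in click_history)
    ranked = sorted(CATALOG, key=lambda item: -topic_counts[item[1]])
    return [item for item in ranked if topic_counts[item[1]] > 0][:n]

# After two clicks on one topic, the feed contains only that topic.
history = [("article_1", "politics_left"), ("article_3", "politics_left")]
print(recommend(history))  # only politics_left items come back
```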
-
-
social-media-ethics-automation.github.io
-
Similarly, recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes. This can happen whether or not the creators of the algorithm intended these outcomes. Once these algorithms are in place, though, they have an influence on what happens on a social media site. Individuals still have responsibility for how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system.
This brings up important issues of algorithmic accountability, which requires developers to foresee and correct potential biases in the systems they design. It also emphasizes the significance of systemic intervention, since user-level actions (such as flagging harmful content or practicing mindful media consumption) are often not sufficient to address widespread algorithmic problems. More attention should be paid to algorithmic transparency, frequent bias audits, and designing systems that support equity, to ensure that recommendation algorithms don't disfavor particular groups or reinforce negative trends.
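As a concrete (and heavily simplified) picture of what one such bias audit could look like, the sketch below compares how often an algorithm showed an ad to each demographic group; the log data and the 0.2 disparity threshold are invented for illustration.

```python
from collections import defaultdict

# Invented impression log: (user_group, ad_was_shown) pairs.
impressions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def audit_show_rates(log, max_gap=0.2):
    """Compare per-group exposure rates (a demographic-parity style check)."""
    shown, total = defaultdict(int), defaultdict(int)
    for group, was_shown in log:
        total[group] += 1
        shown[group] += was_shown
    rates = {g: shown[g] / total[g] for g in total}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

rates, gap, ok = audit_show_rates(impressions)
print(rates, round(gap, 2), "ok" if ok else "AUDIT FLAG: disparity")
```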
-
-
social-media-ethics-automation.github.io
-
In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between designers who make things, and the disabled people whom things are made for. And unfortunately, as researcher Dr. Cynthia Bennett points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.” You can see Dr. Bennett’s research talk on this in the following YouTube video:
This paragraph brings to light a problem in accessible design: the division between designers and disabled people, who are frequently viewed as passive recipients rather than active creators. The framing of design discussions often suggests that non-disabled designers are the creators and disabled people are merely users. According to Dr. Cynthia Bennett, excluding disabled people from the design process is problematic because it ignores their lived experience and knowledge of their own needs. Even when disabled people do contribute, their opinions may not be given the same weight as those of “professional” designers. This reflects a problem in many domains where underrepresented groups are treated as recipients rather than collaborators, which can result in designs that don’t fully meet their needs or reflect their perspectives.
-
-
social-media-ethics-automation.github.io
-
Another way of managing disabilities is assistive technology, which is something that helps a disabled person act as though they were not disabled. In other words, it is something that helps a disabled person become more “normal” (according to whatever a society’s assumptions are). For example:
This emphasis on normalcy can be harmful because it perpetuates the notion that disability is something to be “fixed” rather than accommodated. The text also mentions the high costs of many assistive devices, which can be a barrier for the people who need them and which reflect larger problems of unequal access to help. The mention of abusive practices like gay conversion therapy, or ABA therapy for autistic people, broadens the discussion by highlighting how attempts to make people “normal” can themselves be deeply harmful.
-
-
social-media-ethics-automation.github.io
-
Some governments and laws protect the privacy of individuals (using a Natural Rights ethical framing). These include the European Union’s General Data Protection Regulation (GDPR), which includes a “right to be forgotten”, and, in the United States, the Supreme Court has at times inferred a constitutional right to privacy.
One excellent example is the European Union’s GDPR (General Data Protection Regulation), which protects privacy as a fundamental right. One noteworthy provision is the “right to be forgotten,” which enables people to request the removal of their personal information and reflects the idea that people should have control over their data in the digital age. The U.S., by contrast, lacks the comprehensive, overarching framework found in the EU; American privacy rights are frequently sector-specific (e.g., health, financial).
-
-
social-media-ethics-automation.github.io
-
Deanonymizing Data:
The idea of inferred data—such as the shadow profiles made by social media companies—is worrisome because it shows how businesses can produce new information about users without their knowledge or consent. This type of privacy violation frequently takes place in the background and is difficult for users to monitor or manage. Furthermore, Facebook’s collection of information on people without accounts demonstrates how pervasive data tracking has become: even non-users’ information can be collected without their agreement, which shows the broad ramifications of contemporary data surveillance.
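To see how recovering or inferring identities can work in practice, here is a simplified sketch (invented data, not any company’s actual pipeline) of a classic linkage attack: an “anonymized” dataset is joined back to a public one on quasi-identifiers like ZIP code and birth date.

```python
# "Anonymized" records: names removed, but quasi-identifiers remain.
anonymized = [
    {"zip": "98105", "birth": "1990-04-12", "condition": "diabetes"},
    {"zip": "98101", "birth": "1985-07-30", "condition": "asthma"},
]

# A public dataset (e.g., a voter roll) with the same fields plus names.
public = [
    {"name": "Alice Smith", "zip": "98105", "birth": "1990-04-12"},
    {"name": "Bob Jones",   "zip": "98101", "birth": "1985-07-30"},
]

# Join on (zip, birth): if a pair is unique, the record is re-identified.
index = {(p["zip"], p["birth"]): p["name"] for p in public}
for record in anonymized:
    name = index.get((record["zip"], record["birth"]))
    if name:
        print(name, "->", record["condition"])
```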
-
-
social-media-ethics-automation.github.io
-
Then Sean Black, a programmer on TikTok, saw this and decided to contribute by creating a bot that would automatically log in and fill out applications with random user info, increasing the rate at which he (and others who used his code) could spam the Kellogg’s job applications:
Sean Black’s use of a bot to automate the spamming process draws attention to the contentious relationship between activism and technology, and shows how programming and data manipulation can be used as forms of protest. The use of fake job applications, a form of data poisoning, highlights how dependent businesses are on reliable, clean data to run efficiently. This gives those who want to disrupt systems new opportunities, but it also suggests that the same tactics could one day be applied unethically or destructively.
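As a rough sketch of the mechanism (the field names are invented, and this is not Sean Black’s actual code), a bot like this simply generates randomized form data; the real version then submitted each payload to the application form.

```python
import random
import string

def random_word(length=8):
    """Random lowercase string to use as fake form input."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

def fake_application():
    """Build one randomized application payload (field names invented)."""
    name = random_word()
    return {
        "first_name": name.capitalize(),
        "last_name": random_word().capitalize(),
        "email": name + "@example.com",
        "phone": "".join(random.choices(string.digits, k=10)),
    }

# The real bot would POST each payload to the job-application endpoint;
# here we only print a few to show the shape of the data poisoning.
for _ in range(3):
    print(fake_application())
```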
-
-
social-media-ethics-automation.github.io
-
Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling.
The paragraph draws attention to targeted advertising, a key component of social media monetization. Social media companies gather enormous amounts of user data, which they then monetize by selling companies highly targeted advertising opportunities. Thanks to this technique, ads are more likely to be seen by the users most inclined to interact with them. The passage, however, also highlights the moral dilemmas raised by tailored advertising: it can benefit companies and customers alike, but it can also be abused.
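A minimal sketch of the core targeting mechanism (users, ages, and interest tags all invented): the advertiser picks an interest, the platform returns matching users, and an optional age floor shows where an ethical guardrail could go.

```python
# Invented user profiles with inferred interest tags.
users = [
    {"id": 1, "age": 34, "interests": {"spiders", "plush_toys"}},
    {"id": 2, "age": 12, "interests": {"games", "gambling"}},
    {"id": 3, "age": 29, "interests": {"cooking"}},
]

def target_ad(users, interest, min_age=0):
    """Return ids of users matching an interest, optionally age-gated."""
    return [u["id"] for u in users
            if interest in u["interests"] and u["age"] >= min_age]

print(target_ad(users, "spiders"))               # the happy case from the text
print(target_ad(users, "gambling"))              # no guardrail: reaches a child
print(target_ad(users, "gambling", min_age=18))  # guardrail applied
```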
-
-
social-media-ethics-automation.github.io
-
7.3.5. Flooding Police app with K-pop videos
This paragraph offers an insightful example of trolling as a form of protest. The tactic of flooding a system with irrelevant content has a long history in online communities. This was not an isolated incident but part of a larger trend in digital activism in which communities use disruption to defend vulnerable groups or subvert authority.
-
-
social-media-ethics-automation.github.io
-
7.4. Responding to trolls?
Ignoring trolls hinges on the assumption that trolls seek attention and will lose interest when they are ignored. While this may sometimes be true, ignoring continual harassment doesn’t solve the underlying problem, especially when trolls escalate their behavior to force a reaction, and it is insufficient in more serious cases of online harassment. Worse, the advice implies that if harassment continues, it’s due to the victim’s failure to “manage” their harasser.
-
-
social-media-ethics-automation.github.io
-
6.6.3. Is authentic self-expression good?
It's important to think about the circumstances in which this genuine behavior occurs. In an anonymous setting, people can express their "true" opinions and sentiments without facing real repercussions. But impulsive, raw emotion is not always indicative of an individual's core principles, and in a more accountable, face-to-face setting people may express themselves quite differently; the lack of accountability online makes it hard to say which form of self-expression is the more authentic one.
-
-
social-media-ethics-automation.github.io
-
Another phenomenon related to authenticity which is common on social media is the parasocial relationship. Parasocial relationships are when a viewer or follower of a public figure (that is, a celebrity) feels like they know the public figure, and may even feel a sort of friendship with them, but the public figure doesn’t know the viewer at all. Parasocial relationships are not a new phenomenon, but social media has increased our ability to form both sides of these bonds. As comedian Bo Burnham put it: “This awful D-list celebrity pressure I had experienced onstage has now been democratized.”
The explanation of parasocial relationships struck me the first time I read it, since it explains why viewers feel like they “know” a public figure. I was also surprised that parasocial relationships emerge from intimate-seeming interactions over time, often via media that simulate personal engagement. In my view, this interactivity blurs the line between the public and the personal.
-
-
social-media-ethics-automation.github.io
-
One classic example is the tendency to overlook the interests of children and/or people abroad when we post about travels, especially when fundraising for ‘charity tourism’. One could go abroad, and take a picture of a cute kid running through a field, or a selfie with kids one had traveled to help out. It was easy, in such situations, to decide the likely utility of posting the photo on social media based on the interest it would generate for us, without thinking about the ethics of using photos of minors without their consent. This was called out by The Onion in a parody article, titled “6-Day Visit To Rural African Village Completely Changes Woman’s Facebook Profile Picture”.
This passage raises important issues surrounding the ethics of social media interactions that I had never considered. In the example, people ignore the rights and dignity of the people in their photos, using their situation as a backdrop for self-promotion or fundraising. The curated nature of social media inherently simplifies reality, and a question arises: how can we be more mindful of these simplifications and their potential impact on the people and issues we represent online?
-
-
social-media-ethics-automation.github.io
-
When looking at real-life data claims and datasets, you will likely run into many different problems and pitfalls in using that data. Any dataset you find might have:
- missing data
- erroneous data (e.g., mislabeled, typos)
- biased data
- manipulated data
Any one of those issues might show up in Twitter’s claim or Musk’s counterclaim, but even in the best of situations there is still a fundamental issue when looking at claims like this, and that is that: All data is a simplification of reality.
This list captures the challenges of working with real data. If not handled properly, missing data can lead to biased analyses, and biased data can reinforce pre-existing stereotypes or erroneous conclusions. Erroneous data can show up as mislabeled posts, or as bot-generated content counted in an analysis as real user interaction. So how do we assess the reliability of data in public disputes like this one?
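As a small sketch of what screening a dataset for these pitfalls can look like (invented account records, and a deliberately naive bot heuristic), the snippet below sorts records into usable, missing-data, and suspected-bot buckets:

```python
# Toy dataset of accounts: some fields missing, some suspicious.
accounts = [
    {"user": "alice",     "posts_per_day": 3,    "followers": 120},
    {"user": "bob",       "posts_per_day": None, "followers": 95},  # missing data
    {"user": "x8821_bot", "posts_per_day": 900,  "followers": 2},   # likely bot
]

def screen(accounts, max_posts_per_day=200):
    """Split records into usable, missing-data, and suspected-bot buckets.

    The bot check is deliberately naive; real audits combine many
    signals and still make mistakes, which is part of the dispute.
    """
    usable, missing, suspect = [], [], []
    for a in accounts:
        if a["posts_per_day"] is None:
            missing.append(a)
        elif a["posts_per_day"] > max_posts_per_day:
            suspect.append(a)
        else:
            usable.append(a)
    return usable, missing, suspect

usable, missing, suspect = screen(accounts)
print(len(usable), "usable /", len(missing), "missing /", len(suspect), "suspected bots")
```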
-
-
social-media-ethics-automation.github.io
-
Bots and responsibility
With the rapid development of bots and automation, the ethical dimension needs to be emphasized. If a bot's behavior is morally questionable or even harmful, who will be held accountable: the programmer, the organization that deployed it, or the bot itself? The boundaries here are vague, which makes assigning responsibility difficult.
-
-
social-media-ethics-automation.github.io
-
On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam advertisements at people. You can use bots as a way of buying fake followers, or making fake crowds that appear to support a cause (called Astroturfing).
This is an increasingly critical issue, and the danger I feel closest to is the manipulation of public opinion. Astroturfing can completely distort perceptions of true public sentiment, yet political movements, social movements, and corporate reputations all depend on transparent, organic public participation. That makes it genuinely toxic to society.
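To make the fake-follower side of astroturfing more concrete, here is a toy detection heuristic (invented accounts and thresholds, far cruder than real systems): a brand-new account with almost no posts that follows thousands of accounts looks purchased.

```python
from datetime import date

# Invented follower records.
followers = [
    {"user": "longtime_fan", "created": date(2015, 3, 1), "posts": 800, "following": 300},
    {"user": "acct_19482",   "created": date(2024, 9, 1), "posts": 0,   "following": 4000},
    {"user": "acct_19483",   "created": date(2024, 9, 1), "posts": 1,   "following": 3900},
]

def looks_fake(f, today=date(2024, 10, 15)):
    """Naive heuristic: new account, few posts, mass-following."""
    age_days = (today - f["created"]).days
    return age_days < 90 and f["posts"] < 5 and f["following"] > 1000

print([f["user"] for f in followers if looks_fake(f)])  # the two bought accounts
```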
-
-
social-media-ethics-automation.github.io
-
Intervene
The fact that a Confucian thinker might intervene in this situation strikes me as interesting and unifying. Such a thinker would prioritize their filial and benevolent obligations, ensuring that the parents receive the necessary care, even if it costs the parents money and goes against their current wishes. The decision would be made with respect, compassion, and concern for the parents, with the long-term goal of maintaining harmony within the family.
-
-
social-media-ethics-automation.github.io
-
Rejects Confucian focus on ceremonies/rituals. Prefers spontaneity and play.
Confucians may criticize Daoism’s ideal of individualism as being out of touch with society. Daoists, on the other hand, may consider Confucian life to be too rigid and burdensome, and out of touch with the natural world. Despite these disputes, Confucianism and Daoism were not always mutually exclusive. Historically, many Chinese people drew on both philosophies, following Daoist principles in their personal and spiritual lives and Confucian thinking in their social and family obligations.
-