12.1.2. Memes
Exploring memes within the framework of cultural evolution highlights how ideas propagate much like genes, but with a captivating difference: memes are shaped by human intention. Unlike biological evolution, which relies on random mutation, memes are frequently designed and deliberately modified by people. This comparison creates a rich interplay between natural selection and human ingenuity, allowing for faster adaptation and even 'directed evolution' of concepts. As a result, memes can adjust almost immediately to social trends and shifts in collective awareness.
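A toy simulation can make the random-versus-directed contrast concrete. Everything below is invented for illustration: the target string stands in for "whatever currently resonates with an audience," and fitness is just character overlap.

```python
import random

random.seed(42)

TARGET = "cats are great"  # hypothetical stand-in for what resonates

def fitness(text):
    return sum(a == b for a, b in zip(text, TARGET))

def random_mutation(text):
    # Biology-style change: one random position gets a random character.
    i = random.randrange(len(text))
    return text[:i] + random.choice("abcdefghijklmnopqrstuvwxyz ") + text[i + 1:]

def directed_edit(text):
    # Meme-style change: a person sees what's off and fixes it on purpose.
    for i, (a, b) in enumerate(zip(text, TARGET)):
        if a != b:
            return text[:i] + b + text[i + 1:]
    return text

for label, step in [("random", random_mutation), ("directed", directed_edit)]:
    text, steps = "xxxx xxx xxxxx", 0
    while text != TARGET and steps < 20000:
        candidate = step(text)
        if fitness(candidate) >= fitness(text):  # keep variants that aren't worse
            text = candidate
        steps += 1
    print(f"{label:8s}: reached target = {text == TARGET}, steps = {steps}")
```

Directed edits converge in about a dozen steps, while random mutation takes on the order of a thousand, which is the passage's point about human intention speeding up cultural evolution.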
-
11.3.1. How recommendations can go well or poorly
These examples reveal the complexity of algorithmic recommendations, which balance between beneficial and harmful outcomes. Recommendations that link users to new friends or surface relevant ads can increase engagement and satisfaction, but they can easily go awry when they surface sensitive content, remind users of traumatic events, or connect them with bad actors. This underscores the need to build context awareness and ethics into algorithms to avoid causing distressing experiences. Responsible recommendation practice probably lies in striking a balance between personalization and sensitivity to user well-being.
-
Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content. This is particularly important for people who make their money from social media content. For example, in the case of the simple “show latest posts” algorithm, the best way to get your content seen is to constantly post and repost your content (though if you annoy users too much, it might backfire).
Strategies that depend on constantly posting content to increase visibility encourage a 'quantity over quality' approach, which can be very destructive to a platform's content ecosystem. Creators feel pressure to post frequently to stay relevant, which can push them to compromise on the depth and authenticity of their content. Besides harming creators' mental well-being, this can lead to audience fatigue, where followers disengage because the stream of posts feels repetitive or overwhelming.
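To see why constant posting wins under a purely chronological algorithm, consider a minimal sketch of a "show latest posts" feed. The post data and function names here are invented for illustration.

```python
from datetime import datetime

# Hypothetical posts; authors and timestamps are invented.
posts = [
    {"author": "casual_poster",   "time": datetime(2024, 10, 1, 9, 0)},
    {"author": "frequent_poster", "time": datetime(2024, 10, 1, 11, 0)},
    {"author": "frequent_poster", "time": datetime(2024, 10, 1, 11, 30)},
    {"author": "frequent_poster", "time": datetime(2024, 10, 1, 11, 45)},
]

def latest_posts_feed(posts, limit=3):
    """A purely chronological feed: the `limit` most recent posts, newest first."""
    return sorted(posts, key=lambda post: post["time"], reverse=True)[:limit]

for post in latest_posts_feed(posts):
    print(post["author"], post["time"])
# frequent_poster fills every slot simply by posting more often,
# which is exactly the incentive the passage above describes.
```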
-
Another way of managing disabilities is assistive technology, which is something that helps a disabled person act as though they were not disabled. In other words, it is something that helps a disabled person become more “normal” (according to whatever a society’s assumptions are). For example:
This section points out a very real tension in assistive technology: between empowerment and the pressure to make a disabled person "normal." As much as glasses and wheelchairs can offer independence, the framing reflects an ableist expectation that disabled people adapt to an able-bodied world. A fixation on "fixing" the person, rather than improving accessibility, can be emotionally exhausting and can suggest that disability is something that should not exist. Moreover, assistive technologies are often prohibitively expensive, a barrier that keeps them from the very people who need them and raises equity concerns. Practices ranging from ABA therapy to conversion therapy illustrate the danger of interventions that prioritize normalization over acceptance, sometimes causing severe harm. The conversation should shift toward embracing diversity and building systems that support all abilities, rather than demanding that disabled people fit narrow definitions of functionality. True inclusion means changing society's attitudes and environments, not changing people.
-
Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target. Hackers can target individuals with attacks like:
This really emphasizes that cybersecurity goes beyond just technology; it also involves human behavior. People often reuse passwords for convenience, unaware of how easily that habit can be exploited. It's both fascinating and a bit frightening how trust can be manipulated: take the example of the NSA impersonating Google. Social engineering serves as a perfect reminder that hackers don't always need sophisticated tools; sometimes, they just need to deceive people into trusting the wrong thing. Phishing emails and fake QR codes are particularly clever because they depend on people acting quickly without thinking. The reference to Frank Abagnale from Catch Me If You Can reinforces that social engineering long predates the internet: con artists have always succeeded by exploiting trust rather than technology.
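As a small illustration of why lookalike tricks work, here is a toy heuristic that flags domains one or two characters away from a trusted name. The trusted list and distance threshold are arbitrary assumptions; real phishing detection is far more involved.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete from a
                            curr[j - 1] + 1,             # insert into a
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

# Hypothetical allow-list of legitimate domains.
TRUSTED_DOMAINS = ["google.com", "paypal.com", "microsoft.com"]

def looks_like_phish(domain: str) -> bool:
    # Close to a trusted domain, but not an exact match.
    return any(0 < levenshtein(domain, t) <= 2 for t in TRUSTED_DOMAINS)

print(looks_like_phish("g00gle.com"))  # True: two characters off from google.com
print(looks_like_phish("google.com"))  # False: exact match
```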
-
Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended.
This kind of dataset poisoning connects to the deepfake crisis in South Korea, where manipulated and biased inputs had severe real-world consequences. Deepfake applications rely on enormous datasets of images and videos that feed the algorithms generating realistic fake content. If those datasets are contaminated with biased or skewed data, whether intentionally or not, such as the overrepresentation of certain demographics, the outcomes become deeply problematic. Just as viral survey participation undermined data reliability, deepfake datasets poisoned with biased material enable unethical and harmful applications, such as non-consensual videos or misinformation. Both examples drive home the risk of unregulated data inputs in critical systems.
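A few lines of code can show how a viral influx from one narrow demographic skews a dataset's statistics. All numbers below are made up for illustration.

```python
import random

random.seed(0)

# Organic sample: a broad age range, as the survey intended.
organic = [random.randint(18, 70) for _ in range(200)]

# Viral influx: thousands of respondents from one narrow demographic.
viral = [random.randint(18, 22) for _ in range(2000)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"intended sample mean age: {mean(organic):.1f}")
print(f"after the viral influx:   {mean(organic + viral):.1f}")
# The combined dataset now mostly describes one demographic, so it
# can no longer answer questions about the broader population.
```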
-
One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many people were leaving bad reviews of Yankee Candles saying “they don’t have any scent” (note: COVID-19 can cause a loss of the ability to smell):
This is a really creative use of unusual data to find a trend. Who would have imagined that COVID-19 cases would correlate with negative Yankee Candle reviews complaining of no scent, a proxy for the known symptom of losing one's sense of smell? It is an interesting case of how seemingly unrelated data streams can reveal a pattern or even help track a public-health trend. It also underlines that data correlations must be examined critically to avoid misleading conclusions; negative reviews can have entirely different causes. This is a good example of how powerful, and how limited, data interpretation can be in a real-world setting.
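For concreteness, here is the kind of correlation check that underlies the observation. The weekly counts are invented, not real data, and a high coefficient would still not prove causation.

```python
# Invented weekly counts; not real data.
covid_cases      = [120, 340, 560, 900, 1500, 2400, 2100, 1300]
no_scent_reviews = [2, 5, 9, 14, 22, 35, 30, 18]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson(covid_cases, no_scent_reviews):.2f}")
# A high r shows the two series move together; it says nothing about why.
```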
-
Does anonymity discourage authenticity and encourage inauthentic behavior?
Anonymity can indeed encourage inauthentic behavior and even outright deception. When people are not identifiable, they may act in ways that would go against their self-interest in identifiable settings. This weakened sense of moral accountability can lead people to behave unethically and dishonestly. When individuals in a crowd operate without society's rules, expectations, or standards, their own will becomes the only standard for behavior and speech, with no fear of social or other consequences. At the same time, anonymity can produce the opposite effect: it can encourage honesty in communicating with others, because there is no fear of being judged. The real concern lies in high-pressure settings, where the motives for faking are strongest.
-
Separately, in 2018 during the MeToo movement, one of @Sciencing_Bi’s friends, Dr. BethAnn McLaughlin (a white woman), co-founded the MeTooSTEM non-profit organization, to gather stories of sexual harassment in STEM (Science, Technology, Engineering, Math). Kyle also followed her on Twitter until word later spread of Dr. McLaughlin’s toxic leadership and bullying in the MeTooSTEM organization (Kyle may have unfollowed @Sciencing_Bi at the same time for defending Dr. McLaughlin, but doesn’t remember clearly).
Authenticity is crucial for maintaining engagement and trust in online communities, and when that trust is broken, it damages one's social presence. That is especially true here, where a figure who seemed courageous and inspiring turned out to be fabricated. The account was meant to help people facing oppression, but that purpose cannot survive such a loss of authenticity.
-
Designers sometimes talk about trying to make their user interfaces frictionless, meaning the user can use the site without feeling anything slowing them down.
Yes, that is important when trying to create a program that is 'user-friendly.' When an interface clearly communicates how it should be used, users can navigate more efficiently, and errors and frustration drop significantly. Strong usability promotes product success by making a product easy and satisfying to use, which is key in today's competitive digital environment.
-
Can you think of an example of pernicious ignorance in social media interaction? What’s something that we might often prefer to overlook when deciding what is important?
An example of pernicious ignorance in social media appears when people share misinformation without recognizing its harmful effects, such as reinforcing stereotypes. Users may overlook the impact of spreading biased content, focusing instead on gaining likes or engagement. Neglecting the ethics of what we amplify in this way can have detrimental effects.
-
Ethics: Thinking systematically about what makes something morally right or wrong, or using ethical systems to analyze moral concerns in different situations
This refutes the idea that "machines make more objective decisions than human beings." While machines may process data without human emotions, they rely on code and algorithms designed by humans, which means any biases, assumptions, or limitations in the programmer's thinking can be embedded in the machine's decision-making process.
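A toy screening rule makes the point: the "objective" machine just applies a human's assumptions consistently. The rule and the applicant fields below are hypothetical.

```python
def screen_resume(applicant: dict) -> bool:
    # The programmer decided that employment gaps are disqualifying,
    # an assumption that penalizes caregivers, people who were ill, etc.
    return applicant["years_experience"] >= 5 and not applicant["employment_gap"]

print(screen_resume({"years_experience": 8, "employment_gap": True}))   # False
print(screen_resume({"years_experience": 6, "employment_gap": False}))  # True
# The machine applies the rule with perfect consistency, but the rule
# itself encodes a human's bias: consistency is not objectivity.
```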
-
Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers.
This division of responsibility can create ethical dilemmas: the bot can take on new roles or produce results that none of the people involved anticipated. This complexity shows how the lines between human intent and machine behavior can blur in an automated environment.
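A minimal sketch, with hypothetical function names, of how that split plays out in code: one person writes the loop, while a different person supplies the messages, the posting mechanism, and the schedule.

```python
import time

def run_bot(get_message, post, interval_seconds=3600):
    """The author's code: post whatever get_message() returns, forever."""
    while True:
        post(get_message())           # the author never sees these messages
        time.sleep(interval_seconds)  # the operator chooses the cadence

# The operator's side, stubbed out here: they decide what gets posted.
def get_message():
    return "Hello from a scheduled bot!"

def post(text):
    print(f"[posting] {text}")  # a real bot would call a platform API here

# run_bot(get_message, post)  # commented out: this would loop forever
```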
-
Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them.
That can be unsettling, because a bot can stand in for a human at any time. It also compounds the cybersecurity problem: when bots are indistinguishable from real users, telling true information from false becomes harder, which may even encourage the spread of misinformation.