- Last 7 days
-
social-media-ethics-automation.github.io
-
11.4.3. Radicalization
Social media recommendation algorithms have been blamed for radicalization: by amplifying extreme content, they can send people deep into rabbit holes of misinformation and conspiracy theories. Examples include the Pizzagate conspiracy in 2016, when social media surfaced sensationalized claims that politicians were operating a child-trafficking ring out of a pizzeria, which culminated in an attack on the pizzeria. The Rohingya genocide and the rise of Flat Earth theories likewise point to the danger of algorithms that optimize for engagement rather than accuracy and can nudge people toward harmful beliefs and actions.
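To make the "engagement rather than accuracy" point concrete, here is a minimal, hypothetical sketch in Python of a recommender that ranks posts purely by predicted engagement. The posts and numbers are invented for illustration; real recommendation systems are far more complex.

```python
# Toy illustration: ranking posts purely by predicted engagement.
# All posts and numbers below are invented for the example.

posts = [
    {"title": "Local council meeting recap", "accuracy": 0.95, "predicted_clicks": 40},
    {"title": "SHOCKING secret they don't want you to see!", "accuracy": 0.10, "predicted_clicks": 900},
    {"title": "Fact-check: viral claim is false", "accuracy": 0.98, "predicted_clicks": 120},
]

def engagement_rank(posts):
    """Sort posts by predicted engagement only, ignoring accuracy."""
    return sorted(posts, key=lambda p: p["predicted_clicks"], reverse=True)

for post in engagement_rank(posts):
    print(post["title"], "| accuracy:", post["accuracy"])
# The sensational, least accurate post ends up at the top of the feed,
# which is the dynamic this section describes.
```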
-
Friends or Follows:
Other examples:
- Finding others with shared interests: If you're interested in a specific hobby or topic, such as cooking or a particular sport, recommendations for people who share that interest can help you find communities or friends who provide inspiration and camaraderie.
- Promoting connections that reflect a bias: Algorithms might unintentionally amplify certain kinds of connections based on biased patterns, such as favoring accounts that are highly active or that follow a particular political ideology, which can create echo chambers that limit diversity of thought and exposure to new perspectives; a toy sketch of this dynamic follows below.
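As a rough sketch of how that second pattern can arise, the toy Python below recommends accounts by counting mutual follows. The follow graph is invented; the point is simply that recommending "people like the people you already follow" tends to reinforce whatever clusters already exist.

```python
# Toy "people you may know" recommender based on mutual follows.
# The follow graph below is made up for illustration.
from collections import Counter

follows = {
    "you":   {"ana", "ben", "cara"},
    "ana":   {"ben", "dev"},
    "ben":   {"ana", "dev", "cara"},
    "cara":  {"dev"},
    "dev":   {"ana"},
    "elle":  {"frank"},   # a different cluster you never get shown
    "frank": {"elle"},
}

def recommend(user, follows, top_n=3):
    """Suggest accounts followed by the accounts the user already follows."""
    counts = Counter()
    for friend in follows[user]:
        for candidate in follows.get(friend, set()):
            if candidate != user and candidate not in follows[user]:
                counts[candidate] += 1
    return [name for name, _ in counts.most_common(top_n)]

print(recommend("you", follows))  # ['dev'] -- always from your existing cluster
```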
-
In order for these users to still get the information intended from the images, the image can come with alt-text. You can read more about alt-text in this New York Times feature
Alt text is essential for making images accessible to people with visual impairments. Crafting effective alt text depends largely on context: it should highlight the most relevant parts of the image based on its purpose. Additionally, conveying the emotions or mood evoked by the image can enrich the description and give the person hearing it a fuller experience.
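As a concrete example of what "the image can come with alt-text" looks like in practice, here is a small Python sketch that builds an HTML image tag with context-dependent alt text. The filename and descriptions are hypothetical; the point is that the same photo may deserve different alt text depending on why it is being shown.

```python
# Sketch: the same image gets different alt text depending on context.
# Filename and descriptions are invented for illustration.
from html import escape

def img_tag(src, alt):
    """Build an HTML <img> element with alt text for screen readers."""
    return f'<img src="{escape(src, quote=True)}" alt="{escape(alt, quote=True)}">'

photo = "team_photo.jpg"

# In a post celebrating a win, the mood matters:
print(img_tag(photo, "The swim team laughing and hugging after winning the relay"))

# On a roster page, the identities matter more than the mood:
print(img_tag(photo, "The six members of the 2024 relay team standing poolside"))
```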
-
For example:
One example of assistive technology for people with disabilities is screen readers. These are software applications designed to help individuals who are blind or visually impaired interact with digital content. Screen readers convert text on a screen into synthesized speech or braille output, allowing users to navigate websites, applications, and documents independently. Popular screen readers include JAWS (Job Access With Speech) and NVDA (NonVisual Desktop Access) for Windows, and VoiceOver on Apple devices.
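To give a feel for the text-to-synthesized-speech step, here is a minimal sketch using the pyttsx3 text-to-speech library, assuming it is installed (pip install pyttsx3). pyttsx3 is not a screen reader, just a speech library; real screen readers like JAWS, NVDA, and VoiceOver also handle page structure, headings, and navigation.

```python
# Minimal text-to-speech sketch (assumes: pip install pyttsx3).
# Real screen readers also handle navigation, headings, links, etc.
import pyttsx3

def speak(text):
    engine = pyttsx3.init()      # start the local speech engine
    engine.say(text)             # queue the text to be spoken
    engine.runAndWait()          # speak it and block until finished

speak("Welcome to the course page. Heading level one: Assistive Technology.")
```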
-
- Oct 2024
-
Hacking attempts
The best-known example of metadata making a hack worse is the 2014 celebrity photo leak, often called "The Fappening". A number of celebrities had private photos exposed after attackers got into their iCloud accounts through weak account security. Beyond the private content itself, the metadata embedded in those photos, such as the location data encoded in them, revealed even more about the victims.
-
Metadata:
Metadata can violate privacy in surprising ways. For example, many photos and messages have location data embedded in them, which means others can work out where you were at a given time, and you might never know that such a trail exists. Websites also collect browsing metadata in order to track your online activity. By combining metadata from different sources, companies can build an extremely detailed picture of you without ever needing to access the actual content.
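As an illustration of how much a single photo's metadata can reveal, here is a hedged sketch using the Pillow imaging library (assuming a recent version is installed) to read EXIF tags, including the GPS block, from an image file. The filename is a placeholder, and not every photo will contain GPS data.

```python
# Sketch: reading EXIF metadata (including GPS info) from a photo.
# Assumes a recent Pillow (pip install Pillow); "vacation.jpg" is a
# placeholder filename.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def photo_metadata(path):
    exif = Image.open(path).getexif()
    info = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    # GPS data, if present, sits under the GPSInfo tag as its own block.
    gps_block = exif.get_ifd(0x8825)
    gps = {GPSTAGS.get(t, t): v for t, v in gps_block.items()}
    return info, gps

metadata, gps = photo_metadata("vacation.jpg")
print(metadata.get("DateTime"), metadata.get("Model"))
print(gps)  # e.g. latitude/longitude of where the photo was taken, if recorded
```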
-
Additionally, spam and output from Large Language Models like ChatGPT can flood information spaces (e.g., email, Wikipedia) with nonsense, useless, or false content, making them hard to use or useless.
That is a very valid concern. AI-generated content, such as output from ChatGPT, can flood online platforms like email and Wikipedia with misinformation, eroding trust in those platforms. Because Wikipedia, for example, lets anyone edit entries, it is especially susceptible to the addition of false information. There are moderation systems in place, but it is tough for them to keep up with how quickly AI can generate content. Maintaining the reliability of such platforms will require stronger editorial controls and more awareness on the part of users.
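One simple flavor of the moderation systems mentioned here is rate-based filtering: flag accounts that produce content faster than a person plausibly could. The Python sketch below is purely illustrative, with made-up timestamps and an arbitrary threshold; real platforms combine many stronger signals.

```python
# Toy rate-based spam heuristic: flag accounts posting implausibly fast.
# Timestamps and the threshold are invented for illustration.
from datetime import datetime, timedelta

def looks_automated(post_times, max_posts=5, window=timedelta(minutes=1)):
    """Return True if more than max_posts fall inside any sliding window."""
    times = sorted(post_times)
    for i in range(len(times)):
        j = i
        while j < len(times) and times[j] - times[i] <= window:
            j += 1
        if j - i > max_posts:
            return True
    return False

burst = [datetime(2024, 10, 1, 12, 0, s) for s in range(0, 60, 5)]  # 12 posts in a minute
print(looks_automated(burst))  # True -- worth a closer look by moderators
```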
-
Social Media platforms use the data they collect on users and infer about users to increase their power and increase their profits.
I completely agree with this. As TikTok gained popularity with its short videos, many other platforms quickly adopted this feature for creating and sharing short-form content. Instagram introduced Reels, and YouTube launched Shorts, both experiencing significant growth as a result. Even Spotify has now incorporated a similar short video format.
-
Do not argue with trolls - it means that they win
Many celebrities prefer to let hateful and trolling comments pass by, or simply ignore them, but a few respond when things cross a line. For example, Hailey Bieber received trolling and even death threats, and Selena Gomez responded to it in an Instagram live. Sometimes celebrities even troll other celebrities and respond via tweets or videos.
-
Feeling Powerful:
That's an interesting point. In real life, too, bullies often bully because it gives them a false sense of power and control. They tend to target individuals they perceive as weaker, both physically and emotionally. By exerting their dominance, bullies sometimes mask their own vulnerabilities.
-
Astroturfing
Many growing celebrities, as well as some of my friends, create fake accounts for the sole purpose of following themselves and engaging with their own content, such as by commenting. These accounts are used to like posts, leave positive comments, and make it seem like they have more followers and support than they actually do. It's a way to boost their popularity. This tactic isn't limited to celebrities; regular people are starting to do it as well to enhance their online image and gain more attention.
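A very rough way to think about spotting this pattern in code: accounts that exist only to boost one person tend to concentrate nearly all of their activity on that single target. The Python sketch below flags such accounts using invented data and an arbitrary cutoff; it is an illustration of the idea, not a real detection system.

```python
# Toy heuristic: flag accounts whose engagement is almost entirely
# aimed at one target account. Data and cutoff are invented.
from collections import Counter

def looks_like_booster(interactions, cutoff=0.9):
    """interactions: list of account names this user liked/commented on."""
    if not interactions:
        return False
    counts = Counter(interactions)
    _, top_count = counts.most_common(1)[0]
    return top_count / len(interactions) >= cutoff

fake_fan = ["star_account"] * 48 + ["someone_else"] * 2
normal_user = ["friend_a", "friend_b", "star_account", "friend_c"] * 5

print(looks_like_booster(fake_fan))     # True
print(looks_like_booster(normal_user))  # False
```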
-
infinite scroll.
Infinite scroll has become a prominent feature on platforms like Instagram (with Reels), YouTube (with Shorts), and TikTok. Other social media platforms are also adopting this trend because it offers a seamless, frictionless user experience. However, it can be addictive, leading to hours of mindless scrolling before you realize you have homework and it's time to lock in.
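The core mechanic behind infinite scroll is that the feed never signals an end: whenever you approach the bottom, the next batch is fetched automatically. A minimal Python sketch of that idea, with made-up post data, is below.

```python
# Sketch of the infinite-scroll mechanic: there is no last page, the
# feed just keeps producing the next batch. Post data is invented.
import itertools

def endless_feed(batch_size=3):
    """Yield batch after batch of 'recommended' posts, forever."""
    for start in itertools.count(step=batch_size):
        yield [f"recommended post #{start + i}" for i in range(batch_size)]

feed = endless_feed()
for _ in range(3):          # simulate the user nearing the bottom 3 times
    print(next(feed))       # each "scroll" silently loads another batch
# There is no stopping cue; only the user (or the clock) ends the session.
```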
-
Internet Relay Chat (IRC)
I believe IRC and Discord are quite similar, despite being social media platforms from different decades. On Discord, like IRC, users can join groups focused on specific topics. Within these groups, there are designated channels for particular discussions. For example, the UW swim club has its own group, and within that group, separate channels like "workouts" or "swim times" help keep conversations organized and clear for everyone.
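For a sense of how that group-and-channel structure shows up programmatically, here is a hedged sketch using the discord.py library, assuming it is installed and you have a real bot token (the token string below is a placeholder). It just lists each server the bot is in and the text channels inside it.

```python
# Sketch: listing servers ("guilds") and their text channels with discord.py.
# Assumes: pip install discord.py, and a real bot token in place of the
# placeholder below.
import discord

intents = discord.Intents.default()
client = discord.Client(intents=intents)

@client.event
async def on_ready():
    for guild in client.guilds:                 # each server, e.g. a swim club
        print("Server:", guild.name)
        for channel in guild.text_channels:     # e.g. #workouts, #swim-times
            print("  #", channel.name)
    await client.close()

client.run("YOUR_BOT_TOKEN")  # placeholder token
```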
-
lose in simplifying
I had never thought about this in depth, but now that I've read it, I realize that even texting is very different from real-life interaction, as it lacks many key elements that make face-to-face communication more authentic. Tone, body language, instant feedback, and the depth of the conversation are all missing from an online exchange.
-
location
This issue concerns me because of the lack of privacy: your followers instantly know where you are. This is also why many influencers delay sharing their daily routines and locations by a day; it keeps stalkers among their millions of followers from learning their real-time location.
-
Friendly bots
For example, on Discord, there are various optional bots that can be added to a server, and they can be really helpful. Take the "Pancake" bot, for instance—it plays whatever music you instruct it to, making it super convenient and fun when you're hanging out with friends on a server. This is just one example of the many useful bots available on Discord.
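As a tiny illustration of how such bots work, here is a hedged discord.py sketch of a bot that replies to a "!hello" command. The real Pancake bot does much more (queueing and streaming music); the token below is a placeholder, and the message-content intent must also be enabled for the bot in Discord's developer settings.

```python
# Minimal command-answering Discord bot sketch (discord.py).
# Real music bots are far more involved; this only answers "!hello".
# Replace the placeholder token with a real bot token to run it.
import discord

intents = discord.Intents.default()
intents.message_content = True          # needed to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.author == client.user:   # ignore the bot's own messages
        return
    if message.content.startswith("!hello"):
        await message.channel.send("Hi! I'm a friendly bot on this server.")

client.run("YOUR_BOT_TOKEN")  # placeholder token
```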
-
Bots and responsibility
I believe the responsibility for a bot's actions lies not with the bot itself, but with the person who created it. It's the coder's design and programming that dictate how the bot behaves; the bot doesn't have a mind of its own to act in a certain way, good or bad. This also points to the complexity of ethics in computing, since a bot's intentions can differ from its actions.
-
We also see this phrase used to say that things seen on social media are not authentic, but are manipulated, such as people only posting their good news and not bad news, or people using photo manipulation software to change how they look.
"Social media isn't real life." Online, we encounter countless filters and beautifying apps that alter appearances—brightening skin, slimming bodies, or enhancing features. Many people present a false version of themselves, not just in how they look but also in other aspects of life, all to attract more attention. For instance, someone might share a vacation photo taken years ago or exaggerate their financial status to appear more successful. As oppose to "people only posting their good news and not bad news", a lot of influences these days only post their bad news and hide the good parts to gain more attentrion or emapthy from the users.
-
- Sep 2024
-
Ubuntu
I found it interesting that the quote "a person is a person through other people" captures how our identities and social skills are shaped by the interactions we have with other human beings. We don't instinctively know how to connect; we learn it from others. What interests me about this idea is how it shows that community is very important in bringing out the best in us.
-
Consequentialism:
Consequentialism holds that the morality of an action depends on its outcomes. While this view is widely accepted, I disagree with its framework. For instance, during the COVID-19 pandemic, testing vaccines on animals was deemed necessary for public health, in line with "the greatest happiness for the greatest number." However, I believe that testing on animals is unethical.
-