- Last 7 days
-
social-media-ethics-automation.github.io
-
Social media crowdsourcing can also be used for harassment, which we’ll look at more in the next couple chapters.
This is happening a lot right now with the tensions after the election. I remember seeing that a man who was making very sexist comments on Twitter essentially got doxxed by a bunch of people, which led to his house getting set on fire. The joint effort of all the upset people made the whole ordeal happen very quickly. Doxxing in general is such a common outcome of crowdsourced harassment that it has basically become an inside joke on places like Twitter.
-
-
social-media-ethics-automation.github.io
-
Researchers analyzed the best players’ results for their research and were able to publish scientific discoveries based on the contributions of players.
This is a great example of how interconnected we can be through technology. What would normally be years of work for a few researchers can now be done in a reasonable timeframe by distributing the work among many more people. Taking the burden of all that busywork off the researchers is a great thing, and it was done through a fun game played by people with time to spare.
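To make that distribution concrete for myself, here's a toy Python sketch (all names and numbers are invented, and a real science game is far more involved): work units get fanned out across many players, and the researchers only keep the best submission per puzzle for analysis:

```python
import random

# Toy sketch (invented data): fan puzzle "work units" out across many
# players instead of queuing them all on a handful of researchers.
puzzles = [f"protein-region-{i}" for i in range(1000)]
players = [f"player-{i}" for i in range(200)]

assignments = {p: [] for p in players}
for i, puzzle in enumerate(puzzles):
    assignments[players[i % len(players)]].append(puzzle)

def play(puzzle):
    # stand-in for a human player's attempt; returns a scored solution
    return {"puzzle": puzzle, "score": random.random()}

# Researchers keep only the best submission per puzzle.
best = {}
for player, work in assignments.items():
    for puzzle in work:
        attempt = play(puzzle)
        if puzzle not in best or attempt["score"] > best[puzzle]["score"]:
            best[puzzle] = attempt

print(len(best), "puzzles analyzed from", len(players), "players")
```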
-
-
social-media-ethics-automation.github.io
-
If you are running your own site and suddenly realize you have a moderation problem you might have some of your current staff (possibly just yourself) start handling moderation. As moderation is a very complicated and tricky thing to do effectively, untrained moderators are likely to make decisions they (or other users) regret.
I feel like I remember this being a big problem on the streaming platform Twitch. Often the moderators were volunteer fans of the streamer, so they both lacked real knowledge of how to moderate and weren't required to be there every time the person streamed, making the moderation system spotty and frequently biased.
-
-
social-media-ethics-automation.github.io
-
Facebook uses hired moderators
The choice between paid and volunteer moderators makes a big difference in my opinion. I think that distinction largely defines the culture of a site, as either a distant entity that's being used by a person or a malleable, human thing that can adapt to fit the users. I think people tend to perceive Reddit as much more dynamic and human than Facebook because of this - the volunteer moderators aren't better, but the fact that they don't have a job on the line means they let their personal biases (good and bad) affect their moderation more.
-
- Nov 2024
-
social-media-ethics-automation.github.io
-
But of course, new mediums can provide more information to sift through and more quickly, such as with the advent of the 24-hour news cycle in the 1990s, or, now, social media.
With the election this week, everyone I know has been doomscrolling. It's like a train wreck people can't help but watch - even when we know something is taking a turn for the worse, keeping up with every detail gives us a sense of control and understanding while easing that restless feeling. The 24-hour news cycle makes anxiety-driven habits like this much more prevalent, because newscasts now show people terrible and frightening things happening everywhere, all the time. Where before most people would only really know the details of local events, now everyone has broad access to tragedy around the world, which does help push for change in society but also overwhelms people to the point of freezing and feeling like they can't do anything at all but watch.
-
-
social-media-ethics-automation.github.io
-
You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it,
This is a very accurate comment about how young people now act almost in hindsight - less focused on what they want to do or say now and more focused on what their future selves will be able to reflect on. I think that's why Gen Z has such a fixation on nostalgia, in the way we always pull from the past for fashion, trends, jokes, and so on. When you have that disconnect between your actions and yourself, it's much harder to feel human, so we feel like we have to mimic what we've seen before.
-
-
social-media-ethics-automation.github.io
-
Fig. 12.5 Monica Lewinsky posted this quote tweet that answers a question with a side-eye emoji, which her audiences will understand as referring to her affair with then-US-president Bill Clinton.
Things like this also go viral because of the shock factor of breaking social norms in a creative way. When it happened, the affair was a massive deal that spurred serious negative reactions from the public. Taking that originally grim event and using it as the punchline of a relatively lame, unnecessary joke is what makes it so funny and viral, not the joke itself. I think I remember her making a similar joke with a song lyric - "you wouldn't last an hour in the asylum where they raised me" - pairing the lyric with a picture of the White House at a time when people were using that lyric to joke about fairly trivial situations. Once again, the shock of invoking a widely known, terrible event that partly ruined her life just to join a dumb trend is what made it go so viral.
-
-
social-media-ethics-automation.github.io
-
If a user posts a joke, and people share it because they think it is funny, then their intention and the way the content goes viral is at least somewhat aligned. If a user tries to say something serious, but it goes viral for being funny, then their intention and the virality are not aligned.
An interesting aspect of the intentional/unintentional virality situation is the way people interpret the source content. Often the reference to the source starts off as ironic or unironic depending on whether people are laughing at or with the original speaker, and then once the joke gets played out, the tone fully switches to the other as a sort of counterreaction. I think it's really interesting how viral things get popular, and then that moment of popularity is itself used as a viral thing to spur a second wave.
-
- Oct 2024
-
social-media-ethics-automation.github.io
-
Though this is a big concern about Internet-based social media, traditional media sources also play into this: For example, this study: Cable news has a much bigger effect on America’s polarization than social media, study finds
I struggle to fully buy into the rhetoric that modern social media causes unprecedented levels of polarization because of facts like this. Until very recently (and still very much now), people were heavily grouped by beliefs and lifestyles geographically. America specifically has a deep history of cultural enclaves, situations like "white flight," etc., which all served to group people by their views. With recent technological innovations, location has become less of a binding factor for communities - your neighbors could all work completely different jobs, come from different cultures, have different socioeconomic statuses, and live there for some arbitrary reason. Echo chambers have always existed; they're just more visible now that they are mostly based online.
-
-
social-media-ethics-automation.github.io
-
Recommendations can go poorly when they do something like recommend an ex or an abuser because they share many connections with you.
This is one of the most frightening things about social media - how much of your content and feed is out of your control. Similar to our discussions of privacy, it is incredibly unfair that a person can be violated and exposed without even knowing. The hardest part is that these algorithms often don't "listen" to you - even if you opt out of being recommended to people who have your contact info, follow you elsewhere, or share mutuals with you, it happens anyway. This issue also disproportionately affects marginalized groups, as they tend to face the most persistent harassment, and these algorithms only assist with that.
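As a toy illustration of why this happens (my own sketch, not any platform's actual code): a bare-bones "people you may know" scorer that ranks candidates purely by mutual connections will surface exactly the person you cut off, because an ex naturally shares many of your connections:

```python
from collections import Counter

# Tiny invented friend graph; "ex" shares most of your social circle.
friends = {
    "you":  {"ana", "ben", "cara", "dev"},
    "ana":  {"you", "ben", "ex"},
    "ben":  {"you", "ana", "ex", "cara"},
    "cara": {"you", "ben", "ex"},
    "dev":  {"you", "stranger"},
}

def recommend(user, graph):
    scores = Counter()
    for friend in graph[user]:
        for fof in graph.get(friend, set()):
            if fof != user and fof not in graph[user]:
                scores[fof] += 1  # one point per mutual connection
    return scores.most_common()

print(recommend("you", graph=friends))
# [('ex', 3), ('stranger', 1)] -- 'ex' tops the list because mutual-count
# scoring knows nothing about why you aren't connected anymore.
```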
-
-
social-media-ethics-automation.github.io
-
The following tweet has a video of a soap dispenser that apparently was only designed to work for people with light-colored skin
This is the biggest issue when it comes to accommodating all people in design - the lack of representation among the people designing these things. Without a person of color on the team, they never had the perspective to test for people with darker skin tones. Similarly, if social media designers don't include a variety of people - people with disabilities, people of color, etc. - they will always have blind spots.
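A toy model of the failure (the threshold and numbers here are made up): if an optical sensor only dispenses when enough light reflects back, and the team only ever calibrated it on light skin, darker skin silently falls below the cutoff:

```python
# Toy sketch of a reflectance-based sensor like the soap dispenser's.
THRESHOLD = 0.5  # calibrated by a team who only tested on themselves

def dispense(skin_reflectance: float) -> bool:
    # dispense only if enough emitted light bounces back to the sensor
    return skin_reflectance > THRESHOLD

for tone, reflectance in [("light skin", 0.7), ("dark skin", 0.3)]:
    print(tone, "->", "soap" if dispense(reflectance) else "nothing")
# light skin -> soap
# dark skin -> nothing
```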
-
-
social-media-ethics-automation.github.io
-
In this way of managing disabilities, the burden is put on the computer programmers and designers to detect and adapt to the disabled person.
I think that in the recent years, users and bots have begun to take on the burden of assuring accessibility in content on social media. Bots that transcribe videos and describe images, make things easier to understand through summary, and more all function for the benefit of those who might struggle with naturally navigating these sites. The existence of these user based tools might be giving these sites too much of a pass, though, as it keeps them from feeling motivated to adjust and improve their actual infrastructure.
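Here's a rough sketch of what one of those volunteer accessibility bots might look like. Every function is a hypothetical stand-in of my own: a real bot would call a platform API for the posts and an image-captioning model for the descriptions:

```python
# Sketch of a volunteer accessibility bot (all functions are hypothetical
# stand-ins, not any real platform's API).

def fetch_recent_posts():
    # stand-in for a platform API call
    return [
        {"id": 1, "text": "sunset pics!", "image": "beach.jpg", "alt_text": None},
        {"id": 2, "text": "my cat", "image": "cat.jpg", "alt_text": "an orange cat"},
    ]

def describe_image(filename):
    # stand-in for an image-captioning model
    return f"[auto-generated description of {filename}]"

def reply(post_id, text):
    # stand-in for posting a reply through the platform API
    print(f"reply to post {post_id}: {text}")

# The bot fills the gap the platform and the poster left open:
for post in fetch_recent_posts():
    if post["image"] and not post["alt_text"]:
        reply(post["id"], describe_image(post["image"]))
```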
-
-
social-media-ethics-automation.github.io
-
Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women
This is a clear example of the dangers of not having proper privacy protections for digital content. Privacy features like encryption don't just protect the user from outside forces (like hackers); they protect the user from every person who comes across the data professionally. In this case it is especially scary because the women likely had no idea their information could be accessed like that, so they had no real way to protect themselves.
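For contrast, here's a minimal sketch of one guardrail that would at least make this kind of misuse visible (the design and names are my own invention, not Facebook's actual system): every employee lookup requires a documented reason and is written to an audit log someone else reviews:

```python
import datetime

audit_log = []  # reviewed by a separate security team

def lookup_user(employee, user_id, support_ticket=None):
    # Guardrail 1: no access without a documented reason.
    if support_ticket is None:
        raise PermissionError(f"{employee}: no ticket justifying access to {user_id}")
    # Guardrail 2: every access is recorded, so misuse leaves a trail.
    audit_log.append({
        "employee": employee,
        "user": user_id,
        "ticket": support_ticket,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return {"user_id": user_id}  # the record itself

lookup_user("support_rep", "user_42", support_ticket="T-1001")  # allowed, logged
try:
    lookup_user("curious_employee", "user_42")  # no ticket -> blocked
except PermissionError as err:
    print(err)
```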
-
-
social-media-ethics-automation.github.io
-
Sometimes privacy rules aren’t made clear to the people using a system. For example:
This is a very malicious thing for companies to do, since it keeps users from properly protecting themselves. For social media, it means the site always has the upper hand with your data because you never really know what privacy you have a right to on the site. For work or school systems, this is often used to punish misconduct or gather evidence of it, so it could be argued to be beneficial. Still, keeping that knowledge non-obvious is sketchy, as it can be used to pile on someone an admin already wants fired or expelled - the admin can be selective about whom they actually punish for inappropriate comments.
-
-
social-media-ethics-automation.github.io
-
So those Redditors suggested they spam the site with fake applications, poisoning the job application data, so Kellogg’s wouldn’t be able to figure out which applications were legitimate or not
This is a good example of how these algorithms and data-mining sites can sometimes be turned to the users' benefit. The fact that the idea got as popular as it did on both Reddit and TikTok is a direct consequence of how good the algorithms are at connecting like-minded people. At times this creates toxic communities that lack perspective, but in this case it was able to support a movement.
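A toy illustration of why the poisoning itself worked (all data invented): once fake applications are sampled to look like real ones, any field-based filter discards real applicants about as readily as it catches fakes:

```python
import random

# Fake applications drawn from the same ranges as real ones, so no
# field-based filter can separate the two groups.
def application(is_fake):
    return {
        "years_experience": random.randint(0, 20),
        "resume_pages": random.randint(1, 3),
        "is_fake": is_fake,
    }

pool = [application(False) for _ in range(100)]      # legitimate applicants
pool += [application(True) for _ in range(10_000)]   # the crowdsourced flood

def looks_fake(app):
    # the best "filter" available when fakes mimic real fields
    return app["resume_pages"] < 2

flagged = [a for a in pool if looks_fake(a)]
fakes_caught = sum(a["is_fake"] for a in flagged)
real_lost = sum(not a["is_fake"] for a in flagged)
print(f"flagged {fakes_caught} fakes but also discarded {real_lost} of 100 real applications")
```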
-
-
social-media-ethics-automation.github.io
-
For example, social media data about who you are friends with might be used to infer your sexual orientation.
I've seen something similar a lot on platforms like TikTok, but sort of in reverse, where the algorithm picked up on patterns in what a user liked and disliked and "outed" the user to themselves. It's both funny and creepy: on the one hand, the idea of a machine catching onto those details about you before you do is absurd; on the other, the fact that it can make such accurate judgments is scary.
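A toy version of that friend-graph inference (a tiny invented network - real systems use far more signals than this): just take the majority label among a user's friends, without using anything the user shared about themselves:

```python
from collections import Counter

# Invented data: labels self-reported by others, never by the target.
labels = {"ana": "A", "ben": "A", "cara": "B", "dev": "A"}
friends_of_target = ["ana", "ben", "dev"]  # all the target ever shared

guess, votes = Counter(labels[f] for f in friends_of_target).most_common(1)[0]
print(f"inferred label: {guess} ({votes}/{len(friends_of_target)} friends share it)")
# inferred label: A (3/3 friends share it) -- nothing about the target
# was used directly, only who they are friends with.
```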
-