- Last 7 days
-
social-media-ethics-automation.github.io
-
Preventing Competition

Most importantly, they can prevent a competitor from taking hold. If these people got Internet access through a non-Facebook option, they might join a new or competing social media network, and through the network effect, that competing network might take off. That would be a threat to Meta trying to corner the market on social media. A particularly telling example of this is the story of WhatsApp: though WhatsApp was founded in the US (in 2009), it became very popular outside the US, becoming much more commonly used than Facebook Messenger. Facebook was terrified of losing out on the non-US market, since they wanted to control everything, so in 2014 Facebook spent $19 billion to purchase WhatsApp.
I find it interesting that when huge companies face competition from smaller companies, they decide to purchase the smaller company instead of making their own company better.
-
- Nov 2024
-
-
After the defeat of Nazi Germany, prominent Nazi figures were put on trial in the Nuremberg Trials. These trials were a way of gathering and presenting evidence of the great evils done by the Nazis, and as a way of publicly punishing them. We could consider this as, in part, a large-scale public shaming of these specific Nazis and the larger Nazi movement. Some argued that there was no type of reconciliation or forgiveness possible given the crimes committed by the Nazis. Hannah Arendt argued that no possible punishment could ever be sufficient:
I feel like not much is done to people who act like Nazi Germany today; yes, people who act like that get their accounts removed from social media, but in my opinion that is not enough. I also feel like the US government has the technology to track these people but makes only a slow attempt to. I just saw a video of five white men in Chicago saying slurs while walking down the sidewalk with the flag, and as far as I know they still haven't faced any charges.
-
-
-
While public criticism and shaming have always been a part of human culture, the Internet and social media have created new ways of doing so. We've seen examples of this before with Justine Sacco and with crowd harassment (particularly dogpiling). For an example of public shaming, we can look at late-night TV host Jimmy Kimmel's annual Halloween prank, where he has parents film their children as the parents tell the children that the parents ate all the kids' Halloween candy. Parents post these videos online, where viewers are intended to laugh at the distress, despair, and sense of betrayal the children express. I will not link to these videos, which I find horrible, but instead link you to these articles:
I feel like public shaming should be researched or looked at more, especially alongside something like cancel culture. Public shaming can have a horrible effect on people, giving them all sorts of feelings; therefore, I feel like awareness around shaming should be huge, and people should watch what they are saying, because a lot of people take things seriously, and the United States is facing big issues from school shootings to suicide attempts.
-
-
-
The Ku Klux Klan (KKK) is an American white-supremacist terrorist organization known to harass and murder Black people and others. Members of the KKK keep their identity secret by wearing white robes and hoods over their faces. Often influential and powerful members of society were part of the KKK, such as police officers and government officials. In the 1920s, a magazine called Tolerance published lists of members of the KKK and their addresses, what we would now call “doxing.” They hoped to end the hateful and violent KKK organization.
I feel like doxing in this type of harassment involving the KKK is very important. A lot of the comments mentioned how it could be important to dox police members, since there has been a lot of harassment toward people of color, and there have been cases of police officers secretly being part of the KKK, which you wouldn't know because they have that costume covering them. Therefore, I feel like police officers should get a deeper background check to help bring harassment issues down.
-
-
-
Individual harassment (one individual harassing another individual) has always been part of human cultures, but social media provides new methods of doing so. This can be done privately through things like:
- Bullying: like sending mean messages through DMs.
- Cyberstalking: continually finding the account of someone and creating new accounts to continue following them, or possibly researching the person's physical location.
- Hacking: hacking into an account or device to discover secrets, or make threats.
- Tracking: an abuser might track the social media use of their partner or child to prevent them from making outside friends. They may even install spy software on their victim's phone.
- Death threats / rape threats
- Etc.
I think social media apps should pay more attention to harassment, and I also believe no one should ever fear posting on social media just because they think they will get harassed. I remember we had a huge cyberbullying issue a couple of years back, but we have done almost nothing to improve it, with only a few apps banning accounts and demonetizing videos. I still see a lot of people harassing each other on big platforms such as TikTok and YouTube Shorts.
-
-
-
When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing. For example, Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors, and on StackOverflow, “A 2013 study has found that 75% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.” We see the same phenomenon on Twitter.
I found this part of the reading very interesting, because I would say I am one of the lurkers on social media: I am always lurking on Instagram and never posting at all.
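The power-user/lurker split the reading describes can be made concrete with a tiny computation. This is only a sketch with made-up numbers; the function name and the data are my own illustration, not from the reading:

```python
def contribution_split(counts, top_fraction=0.01):
    """Given per-user contribution counts, return the share of all
    contributions made by the top `top_fraction` of users."""
    ordered = sorted(counts, reverse=True)
    top_n = max(1, int(len(ordered) * top_fraction))
    total = sum(ordered)
    return sum(ordered[:top_n]) / total if total else 0.0

# Hypothetical community of 100 users: one power user with 990 edits,
# nine occasional contributors with one edit each, and 90 lurkers.
counts = [990] + [1] * 9 + [0] * 90
share = contribution_split(counts)
print(f"Top 1% of users made {share:.0%} of contributions")  # → 99%
```

With numbers shaped like the Wikipedia statistic above, the top 1% of users ends up with nearly all of the contributions.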
-
-
-
15.1. Types of Content Moderator Set-Ups

There are a number of different types of content moderators and ways of organizing them, such as:

15.1.1. No Moderators

Some systems have no moderators. For example, a personal website that can only be edited by the owner of the website doesn't need any moderator set up (besides the person who makes their website). If a website does let others contribute in some way, and is small, no one may be checking and moderating it. But as soon as the wrong people (or spam bots) discover it, it can get flooded with spam, or have illegal content put up (which could put the owner of the site in legal jeopardy).

15.1.2. Untrained Staff

If you are running your own site and suddenly realize you have a moderation problem, you might have some of your current staff (possibly just yourself) start handling moderation. As moderation is a very complicated and tricky thing to do effectively, untrained moderators are likely to make decisions they (or other users) regret.

15.1.3. Dedicated Moderation Teams

After a company starts working on moderation, they might decide to invest in teams specifically dedicated to content moderation. These teams of content moderators could be considered human computers hired to evaluate examples against the content moderation policy of the platform they are working for.

15.1.4. Individuals Moderating Their Own Spaces

You can also have people moderate their own spaces. For example:
- When you text on the phone, you are in charge of blocking numbers if you want to (though the phone company might warn you of potential spam or scams).
- When you make posts on Facebook or upload videos to YouTube, you can delete comments and replies.
In some of these systems, you can also allow friends access to your spaces to let them help you moderate them.

15.1.5. Volunteer Moderation

Letting individuals moderate their own spaces is expecting individuals to put in their own time and labor. You can do the same thing with larger groups and have volunteers moderate them. Reddit does something similar, where subreddits are moderated by volunteers, and Wikipedia moderators (and editors) are also volunteers.

15.1.6. Automated Moderators (Bots)

Another strategy for content moderation is using bots, that is, computer programs that look through posts or other content and try to automatically detect problems. These bots might remove content, or they might flag things for human moderators to review.
I feel like every social media app should have some sort of moderation, whether it's trained or not. Especially Twitter: Elon Musk can keep the app as "real" as he wants, but it should have some sort of moderation around cyberbullying, and if it already has some, I think it needs more.
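As a rough illustration of the "automated moderators (bots)" set-up from the reading, here is a minimal sketch of a bot that flags posts for human review rather than removing them. The blocklist phrases and post structure are invented for the example; real systems are far more sophisticated:

```python
# Illustrative blocklist; a real platform's rules would be far richer.
BLOCKLIST = {"spamlink.example", "buy followers"}

def flag_for_review(posts):
    """Return the posts whose text contains any blocklisted phrase,
    so a human moderator can make the final call."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        if any(phrase in text for phrase in BLOCKLIST):
            flagged.append(post)
    return flagged

posts = [
    {"id": 1, "text": "Check out my vacation photos!"},
    {"id": 2, "text": "Buy followers cheap at spamlink.example"},
]
print([p["id"] for p in flag_for_review(posts)])  # → [2]
```

Flagging instead of auto-removing keeps a human in the loop, matching the reading's point that bots "might flag things for human moderators to review."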
-
-
-
Another concern is for the safety of the users on the social media platform (or at least the users that the platform cares about). Users who don't feel safe will leave the platform, so social media companies are incentivized to help their users feel safe. So this often means moderation to stop trolling and harassment.

14.1.4. Potentially Offensive

Another category is content that users or advertisers might find offensive. If users see things that offend them too often, they might leave the site, and if advertisers see their ads next to too much offensive content, they might stop paying for ads on the site. So platforms might put limits on language (e.g., racial slurs), violence, sex, and nudity. Sometimes different users or advertisers have different opinions on what should be allowed or not. For example, “The porn ban of 2018 was a defining event for Tumblr that led to a 30 percent drop in traffic and a mass exodus of users that blindsided …”
I still find it insane that Twitter (or X) has almost no content moderation, and I don't know whether to see that as a good thing or a bad thing. Elon Musk said he keeps it that way for a reason, so people know what's happening in the real world, or so it can be "real", but sources have recently shown that after Trump got elected there has been an increase in cyberbullying, specifically against Black Americans.
-
-
-
For example, Facebook has a suicide detection algorithm, where they try to intervene if they think a user is suicidal (Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts). As social media companies have tried to detect talk of suicide and sometimes remove content that mentions it, users have found ways of getting around this by inventing new word uses, like “unalive.”
I feel like every social media app should have some sort of suicide-prevention algorithm inside their app, since America has a big mental health problem, rather than just relying on people to call the prevention line themselves.
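To see why users can evade detection by coining words like "unalive", consider a toy keyword matcher. This is my own illustration, not Facebook's actual algorithm, which uses machine learning rather than a fixed word list:

```python
# Toy word list for illustration only.
KEYWORDS = {"suicide", "kill myself"}

def naive_flag(text):
    """Flag text if it contains any listed keyword."""
    text = text.lower()
    return any(kw in text for kw in KEYWORDS)

print(naive_flag("thinking about suicide"))    # → True
print(naive_flag("I want to unalive myself"))  # → False: the euphemism slips through
```

A fixed list like this has to chase every new euphemism users invent, which is exactly the cat-and-mouse dynamic the reading describes.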
-
-
-
“If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through.
I feel like no kid should have access to social media until they reach a certain age, which is like 13-15. I personally was raised like that, getting my first phone when I was 14, because there is just so much info out there that could be graphic and uncensored, which would ultimately change your kid's behavior, such as on X under Elon Musk.
-
-
-
A meme is a piece of culture that might reproduce in an evolutionary fashion, like a hummable tune that someone hears and starts humming to themselves, perhaps changing it, and then others overhearing next. In this view, any piece of human culture can be considered a meme that is spreading (or failing to spread) according to evolutionary forces. So we can use an evolutionary perspective to consider the spread of:
I find it interesting that people can share culture and mutations through memes, almost like a type of framework. People share a meme hoping to get some type of reaction, or to signal agreement; for example, if the meme is a political meme, you can find out a lot about a person just from them sharing it.
-
- Oct 2024
-
-
In 2016, the Twitter account @Sciencing_Bi was created by an anonymous bisexual Native American Anthropology professor at Arizona State University (ASU). She talked about her experiences of discrimination and about being one of the women who was sexually harassed by a particular Harvard professor. She gained a large Twitter following among academics, including one of the authors of this book, Kyle.
I feel like authenticity is very important on social media. For example, when people argue on social media and see they are wrong in an argument, they jump to saying things that are not authentic, pulling sources that are not real and making things up, which is very deceiving for people trying to make connections on social media.
-
-
-
Pieces of data on social media platforms are organized as lists, such as:
- lists of friends or followers
- lists of posts
- lists of photos in a post
- lists of people who liked a post
- etc.
This made me think about how a person would like to get their followers up on Instagram, and how easy it is to just follow a bunch of people without thinking about how much coding/programming it takes to follow a single person. It really makes me think about how complicated programming is, since a single follow sets off code behind the scenes, and imagine that for tens of thousands of follows.
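The nested lists the reading describes can be sketched as ordinary Python data structures. The field names here are invented for illustration, not any real platform's API:

```python
user = {
    "username": "example_user",
    "followers": ["friend_a", "friend_b", "friend_c"],  # list of followers
    "posts": [                                          # list of posts
        {
            "text": "My first post!",
            "photos": ["photo1.jpg", "photo2.jpg"],     # list of photos in a post
            "liked_by": ["friend_a", "friend_c"],       # list of people who liked it
        },
    ],
}

# Gaining a follower is then just appending one item to a list:
user["followers"].append("friend_d")
print(len(user["followers"]))  # → 4
```

The list operation itself is tiny; what makes a real follow expensive is everything around it (networking, databases, notifications), which runs for every one of those tens of thousands of follows.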
-
-
-
I feel like the user interface is a super important part of social media, as we talked about in class, for making connections and meeting new people. I forgot what that circle you made in class was called, but it covers what social media does to you as a person and what you can gain from it as well. Overall, I would say the user interface connects a lot to the circle you showed us.
-