- Last 7 days
-
social-media-ethics-automation.github.io
-
Network effect: Something is more useful the more people use it (e.g., telephones, the metric system). For example, when the Google+ social media network started, not many people used it, which meant that if you visited it there wasn’t much content, so people stopped using it, which meant there was even less content, and it was eventually shut down. Network power: When more people start using something, it becomes harder to use alternatives. For example, Twitter’s large user base makes it difficult for people to move to a new social media network, even if they are worried the new owner is going to ruin it, since the people they want to connect with aren’t all on some other platform. This means Twitter can get much worse and people still won’t benefit from leaving it.
This happened recently with Threads, Instagram's attempt to become the next Twitter by copying most of Twitter's features. It launched at an opportune moment but didn't do a good job of attracting and retaining a user base.
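One rough way to see why a network's value grows faster than its headcount (the Metcalfe's-law-style argument often used to explain network effects) is to count the possible connections between users. A toy sketch, with made-up user counts:

```python
def possible_connections(n_users: int) -> int:
    """Number of distinct pairs of users who could talk to each other."""
    return n_users * (n_users - 1) // 2

# Doubling the user base roughly quadruples the possible connections,
# which is one way to see why a small network feels empty and a large
# one is hard to leave.
for n in (10, 20, 40):
    print(n, possible_connections(n))  # 45, 190, 780 pairs respectively
```

This is only an illustration; real network value depends on which connections people actually want, not the raw pair count.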
-
For example, the actor Stellan Skarsgård complained that in the film industry, it didn’t matter if a company was making good movies at a decent profit. If there is an opportunity for even more profit by making worse movies, then that is what business leaders are obligated to do:
This happens far too often. Many a show or series is ruined by a bad sequel or a cheap adaptation, and it's easy to spot the cash-grab movies that reel you in with name recognition rather than quality.
-
- Nov 2024
retract button.
While Twitter doesn't have a retract button, it now has Community Notes, which functions somewhat like one. It lets users add clarifications to a post, which helps slow the spread of misinformation.
-
Fig. 18.3 Part 2 of Jeremy Schneider’s apology
It's rare to see apologies for things like this on Twitter, and even rarer to get one this genuine and fleshed out. Most people just delete the original tweet and move on, but this poster made the effort to actually go and do the reading, in a bar no less.
-
They might also try to use the legal system, but online harassment is often not taken seriously, and harassers often use tactics that avoid being illegal.
It's really difficult to pursue online harassment through the legal system. Harassment campaigns typically involve a lot of people, and individually tracking down and punishing all of them is extremely difficult, especially when they're located in other countries.
-
Harassment is behavior which uses a pattern of actions which are permissible by law, but still hurtful.
Because of the pattern requirement, harassment can be really hard to prove. Often the pattern gets obscured by other behavior, especially in the workplace.
-
Fig. 16.2 A note written with intentionally bad handwriting.
This example reminds me of Genius, a site that crowdsources music lyrics. People try to figure out difficult song lyrics and let others correct them and fill in the missing parts.
-
Do you think there are ways a social media platform can encourage good crowdsourcing and discourage bad crowdsourcing?
I think it's very hard for social media platforms to encourage good crowdsourcing while also discouraging bad crowdsourcing. Crowdsourcing on the internet will always be hit or miss given how easily it tips into witch hunts.
-
15.1.6. Automated Moderators (bots)
Automated mods have become a great tool that lets small creators moderate their own comments with little effort. The main issue is their lack of flexibility: some things get accidentally flagged while other things slip past the filter.
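The inflexibility is easy to see in a minimal sketch of a keyword-based auto-moderator (the blocklist and comments here are hypothetical, not any platform's real filter):

```python
import re

BLOCKED_WORDS = {"spam", "scam"}  # hypothetical blocklist

def is_flagged(comment: str) -> bool:
    """Flag a comment if any blocked word appears as a whole word."""
    words = re.findall(r"[a-z]+", comment.lower())
    return any(w in BLOCKED_WORDS for w in words)

print(is_flagged("This is a scam"))     # True: caught
print(is_flagged("I love scampi"))      # False: whole-word matching avoids one false positive
print(is_flagged("This is a s c a m"))  # False: trivially bypassed with spacing
```

Substring matching instead of whole-word matching would catch the spaced-out bypass even less but would start flagging innocent words like "scampi", which is exactly the flag-or-bypass trade-off these bots face.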
-
Individual users are often given a set of moderation tools they can use themselves, such as:
While these tools are nice to have, moderating your own comments isn't feasible once you have a larger audience. Filtering through comments takes enormous time, and people can easily work around these moderation tools.
-
First let’s consider that, while social media use is often talked of as an “addiction” or as “junk food,” there might be better ways to think about social media use, as a place where you might enjoy, connect with others, learn new things, and express yourself.
My way of enjoying social media is using it only to connect with people I actually know and am friends with. It's not worth branching out too far beyond my own circle and content.
-
The seeking out of bad news, or trying to get news even though it might be bad, has existed as long as people have kept watch to see if a family member will return home safely. But of course, new mediums can provide more information to sift through and more quickly, such as with the advent of the 24-hour news cycle in the 1990s, or, now social media.
Doomscrolling has gotten worse as recommendation algorithms have become more refined. Platforms like Facebook and TikTok are very good at surfacing one similar post after another, which keeps you scrolling endlessly.
-
“Content going viral is overwhelming, intimidating, exciting, and downright scary.”
As cool as being famous can seem, there are lots of downsides. Everything you do garners more attention, and you become at risk of doxxing or accidentally exposing personal information.
-
The meme above is composed of many pieces copied from elsewhere, and modified and put together. Here are the pieces we could identify:
The majority of memes come from a small group of people reposting a slightly edited image. For example, the photo of the guy running from a floating man now appears in an insane number of posts using that template with slightly different captions.
-
- Oct 2024
A recommendation algorithm like YouTube’s tries to discover categories of content, so the algorithm can recommend more of the same type of content. F.D. Signifier explains:
I remember when YouTubers felt forced to make their videos a certain length so the algorithm would pick them up and they could run more ads. Every video on my feed was just a couple of seconds over ten minutes, with dead space toward the end to meet the threshold.
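The "discover categories, then recommend more of the same" idea can be sketched with a toy content-based recommender. Everything here (the watch history, the candidate videos, the tags) is invented for illustration; real systems learn categories rather than reading hand-written tags:

```python
from collections import Counter

# Hypothetical watch history and candidate videos, each tagged by category.
watched = [{"gaming", "retro"}, {"gaming", "speedrun"}, {"cooking"}]
candidates = {
    "vid_a": {"gaming", "retro"},
    "vid_b": {"music"},
    "vid_c": {"cooking", "baking"},
}

# Build a taste profile: how often each tag shows up in the history,
# then score candidates by the total weight of their tags.
profile = Counter(tag for tags in watched for tag in tags)
scores = {vid: sum(profile[t] for t in tags) for vid, tags in candidates.items()}
print(max(scores, key=scores.get))  # the gaming video wins for this history
```

Even this tiny version shows the feedback loop: whatever you watched most gets recommended most, which narrows what you see next.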
-
Recommendations for friends or people to follow can go well when the algorithm finds you people you want to connect with.
I'm still amazed at how well the connection-recommendation algorithm works. Instagram's suggested follows do a good job of surfacing people I actually know, even when we share only a few mutual connections.
-
Design Justice
It's hard to cover every possible disability because any specific disability affects a relatively small share of users. Designers often only make changes after someone with a particular disability finds they can't use something. Still, things improve over time: buildings are more wheelchair accessible, and online media has gotten better for people who are hard of hearing.
-
And, in general, cultures shift in many ways all the time, making things better or worse for different disabled people.
It's difficult to make things accessible, especially for uncommon disabilities. Accessibility is usually an afterthought because it isn't something designers focus on; it doesn't affect the majority of their users.
-
For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user.
I feel that direct messages on social media sites should be private from other users but not from the companies themselves. A lot of illegal activity happens through private messaging, so we shouldn't expect our DMs to be fully private.
-
One of the things you can do as an individual to better protect yourself against hacking is to enable 2-factor authentication on your accounts.
I've started enabling 2FA on all my accounts, but 2FA can be buggy and annoying to use. Some services use email, some use texts, and others require an app, so it gets tedious switching between them to find the right code.
-
Sometimes a dataset has so many problems that it is effectively poisoned or not feasible to work with.
For my final project in INFO 201, I had to find a dataset to build a website around. So many datasets had incomplete or unrepresentative data that it was hard to choose one, especially since dealing with null values in R is a pain.
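A quick first check on any candidate dataset is counting the missing cells per column before committing to it. A small sketch (the CSV here is invented to stand in for a messy public dataset; the course work was in R, but the same scan is easy in any language):

```python
import csv
import io

# Hypothetical CSV with gaps, standing in for a messy public dataset.
raw = """city,population,median_income
Seattle,737015,
Tacoma,,65000
Spokane,228989,57000
"""

rows = list(csv.DictReader(io.StringIO(raw)))
# Count empty cells column by column.
missing = {col: sum(1 for row in rows if not row[col]) for col in rows[0]}
print(missing)  # {'city': 0, 'population': 1, 'median_income': 1}
```

If the counts are high in the columns you actually need, it's usually cheaper to pick a different dataset than to impute your way around the holes.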
-
Try this yourself and see what Google thinks of you!
I had personalized ads turned off, but it's still crazy to see what data points Google uses to target ads. It keeps track of relationship status, employment status, and even household income.
-
In the youtube comments, some people played along and others celebrated or worried about who would get tricked.
This happens a lot online now. People new to a hobby won't understand its specific terms, so many fall for these troll/satire videos. It's especially prevalent in smaller communities where everyone is more active.
-
In 2011, Am
I love reading these kinds of reviews, especially on items that are barely functional or obviously a scam. The troll reviews are really fun to read and share; the only problem is that they often inflate the items' ratings.
-
Fig. 6.8 Fred Rogers explaining to my younger sister (Jessica), that he can’t come visit our house.
I'm surprised they mailed both of them back and actually responded to the messages. It's very much in line with the Mr. Rogers brand, but it means someone read through their messages and wrote a personal response.
-
For example:
Code switching is almost mandatory in public. You can't talk and act the same way around everyone; the way you act toward customers and coworkers is different from how you act with friends and family.
-
Sometimes designers add friction to sites intentionally.
Another example of friction in web design is TikTok's "you've been watching too long" prompt. Other social media platforms and games have been implementing similar nudges as well.
-
And now as the internet and social media have taken off in the early 2000s, we are again in a world full of rumors and conspiracy theories.
It's interesting how we're back in the age of rumors and conspiracy theories. Back then, people didn't have the tools to easily fact-check; in modern times, people don't have the attention span to fact-check what they see on the internet. Short, half-true rumors spread like wildfire now.
-
Data points often give the appearance of being concrete and reliable, especially if they are numerical. So when Twitter initially came out with a claim that less than 5% of users are spam bots, it may have been accepted by most people who heard it.
Statistics can always be used to mislead or trick readers. In this example, the claim is that fewer than 5% of users are spam bots, but it never says what counts as a "user." Many inactive accounts could pad the denominator, lowering the reported percentage without lowering the share of bots that active users actually encounter.
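The denominator trick is just arithmetic, so it's easy to make concrete. These counts are entirely hypothetical (not Twitter's real numbers); the point is only that the same bot count yields very different headline percentages:

```python
# Hypothetical counts showing how the choice of denominator
# changes the headline number.
bots = 10_000_000
active_humans = 190_000_000
inactive_accounts = 300_000_000  # rarely seen, but still "users" if you want them to be

share_of_active = bots / (bots + active_humans)
share_of_all = bots / (bots + active_humans + inactive_accounts)

print(f"{share_of_active:.1%} of active users")  # 5.0% of active users
print(f"{share_of_all:.1%} of all accounts")     # 2.0% of all accounts
```

A company can truthfully report whichever figure sounds better, which is why a claim like "less than 5%" is meaningless without knowing the denominator.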
-
One could go abroad, and take a picture of a cute kid running through a field, or a selfie with kids one had traveled to help out. It was easy, in such situations, to decide the likely utility of posting the photo on social media based on the interest it would generate for us, without
I've always felt weird seeing these kinds of posts on my feed. Posting about helping the less fortunate feels morally off; it makes the good deed seem disingenuous.
-
Why would users want to be able to make bots? How does allowing bots influence social media sites’ profitability?
Allowing bots gives a site more "users" and more data traffic, which makes it more attractive to advertisers. Users create bots for many reasons, but the main one is trying to grow an account.
-
On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam advertisements at people. You can use bots as a way of buying fake followers, or making fake crowds that appear to support a cause (called Astroturfing).
I most commonly see bots used for online shopping: scalpers use them to grab concert tickets or new clothing drops in order to resell them at a profit.
-
- Sep 2024
Taoism
Taoism focuses less on the individual and more on the world as a whole. Individually we can't make much of a difference, so it's better to go along with the flow of the universe.
-
How do you divide out responsibility for a bot's actions between the person writing the code and the person running the program?
I feel that most of the responsibility lies with the person who ran the code. They had the final say over what the bot would do, unlike the programmer, who just created it.
-