social-media-ethics-automation.github.io
11.4.1. Filter Bubbles

One concern with recommendation algorithms is that they can create filter bubbles (or “epistemic bubbles” or “echo chambers”), where people get filtered into groups and the recommendation algorithm only gives them content that reinforces, and doesn’t challenge, their interests or beliefs. These echo chambers allow people in the groups to freely have conversations among themselves without external challenge. Filter bubbles can be good or bad, such as bubbles forming around:
- Hate groups, where people’s hate and fear of others gets reinforced and never challenged
- Fan communities, where people’s appreciation of an artist or work of art is assumed, and then reinforced and never challenged
- Marginalized communities, where people can find safe spaces in which they aren’t constantly challenged or harassed
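To see how such a bubble forms mechanically, here is a toy Python sketch of the feedback loop, where the recommender only surfaces topics the user already engages with (the topics and mechanics are invented for illustration):

```python
import random

content_pool = ["politics-left", "politics-right", "sports", "music", "science"]

def recommend(interests):
    """Only recommend topics the user already engages with (the 'filter')."""
    matching = [t for t in content_pool if t in interests]
    return random.choice(matching) if matching else random.choice(content_pool)

interests = {"politics-left"}
for _ in range(10):
    topic = recommend(interests)
    interests.add(topic)  # engagement reinforces the existing interest

print(interests)  # never grows beyond the starting interest: a bubble
```

Because the filter only ever draws from what the user already likes, nothing new can enter the loop; that closed feedback is the bubble.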
If your views are never challenged, you can never improve or learn new things. The worst case would be getting filtered into a group you're not actually part of, which can make your feed completely unenjoyable; it can feel like being attacked when video after video challenges your views.
social-media-ethics-automation.github.io
When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users. Some recommendation algorithms can be simple, such as reverse chronological order, meaning it shows users the latest posts (like how blogs work, or Twitter’s “See latest tweets” option). They can also be very complicated, taking into account many factors, such as:
- Time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today)
- Whether the post was made or liked by my friends or people I’m following
- How much this post has been liked, interacted with, or hovered over
- Which other posts I’ve been liking, interacting with, or hovering over
- What people connected to me or similar to me have been liking, interacting with, or hovering over
- What people near you have been liking, interacting with, or hovering over (they can find your approximate location, like your city, from your internet IP address, and they may know even more precisely). This perhaps explains why sometimes when you talk about something out loud it gets recommended to you (because someone around you then searched for it). Or maybe they are actually recording what you are saying and recommending based on that.
- Phone numbers or email addresses (sometimes collected deceptively) can be used to suggest friends or contacts
- And probably many more factors as well!
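To make the contrast concrete, here is a toy Python sketch of reverse chronological ordering next to a weighted scoring approach using factors like those listed above (the specific signals and weights are invented for illustration):

```python
# Hypothetical posts carrying a few of the signals listed above
posts = [
    {"id": 1, "age_hours": 2,  "likes": 5,   "liked_by_friend": True},
    {"id": 2, "age_hours": 30, "likes": 900, "liked_by_friend": False},
    {"id": 3, "age_hours": 1,  "likes": 40,  "liked_by_friend": True},
]

def reverse_chronological(posts):
    """The simple algorithm: newest first, nothing else considered."""
    return sorted(posts, key=lambda p: p["age_hours"])

def engagement_score(post):
    """A made-up weighted score combining recency, popularity, and friends."""
    return (
        -0.5 * post["age_hours"]          # newer posts score higher
        + 0.1 * post["likes"]             # popular posts score higher
        + 10.0 * post["liked_by_friend"]  # friends' likes weigh heavily
    )

def recommended(posts):
    return sorted(posts, key=engagement_score, reverse=True)

print([p["id"] for p in reverse_chronological(posts)])  # [3, 1, 2]
print([p["id"] for p in recommended(posts)])            # [2, 3, 1]: likes win out
```

Changing the weights changes what surfaces, which is why the same post pool can produce very different feeds on different platforms.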
I've always known about recommendation algorithms and had a good idea of what information they use to show us results. But there's clearly a lot of information I would never have guessed they use. Maybe they use keywords from text messages or some other factor. The more you learn about an app's recommendation algorithm, the more you can manipulate it to show you more of what you want.
social-media-ethics-automation.github.io
10.3. Why It Matters Who Designs

10.3.1. Who gets designed for

When designers and programmers don’t think to take into account different groups of people, they might make designs that don’t work for everyone. This problem often shows up in how designs do or do not work for people with disabilities, but it shows up in other areas as well.
When we design anything, I believe we should start by designing it to work for as many people as possible. Of course, you cannot accommodate everyone, which is why I think that after the original product is released you must keep it up to date and add accessibility features as the need for them becomes apparent. This is what allows more people to use what you've created.
social-media-ethics-automation.github.io
A disability is an ability that a person doesn’t have, but that their society expects them to have. [1] For example:
- If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation.
- If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation.
- If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation.
- If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs, would have a disability in that situation.

Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group might just be “normal” in another.
This is a great definition and set of examples of disability. I've learned that having a disability doesn't mean something is wrong with a person; it just means they don't have the ability to do a specific thing. Since disability is based on society's expectations, if we change our expectations and make things more accessible, we can reduce the impact of specific disabilities.
social-media-ethics-automation.github.io
While there are proper security practices for storing passwords (storing only encrypted or hashed versions rather than plain text), companies don't always follow them. For example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly, and hackers then leaked their password database of 153 million users.

From a security perspective there are many risks that a company faces, such as:
- Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women
- Hackers finding a vulnerability and inserting, modifying, or downloading information. For example:
  - hackers stealing the names, Social Security numbers, and birthdates of 143 million Americans from Equifax
  - hackers posting publicly the phone numbers, names, locations, and some email addresses of 530 million Facebook users, or about 7% of all people on Earth
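A minimal sketch of what proper password storage can look like, using only Python's standard library (hashlib, hmac, and os are real modules; a production system would more likely use a dedicated scheme like bcrypt or argon2):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store only (salt, hash) -- never the password itself."""
    salt = os.urandom(16)  # random salt: identical passwords get different hashes
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def check_password(attempt: str, salt: bytes, stored: bytes) -> bool:
    """Re-hash the attempt with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 600_000)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("hunter2")
print(check_password("hunter2", salt, stored))  # True
print(check_password("wrong", salt, stored))    # False
```

With a scheme like this, a leaked database exposes only salts and hashes; the Facebook and Adobe incidents above happened because passwords were stored in plain text or encrypted improperly.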
This is so fascinating to me. As the authors listed risks, they used real-world incidents and linked articles where you can learn more about each event. Every one of these events seems crazy, and I can't believe I hadn't heard of most of them. It shows that even when you think you are being private, some things are simply out of your control.
social-media-ethics-automation.github.io
There are many reasons, both good and bad, that we might want to keep information private:
- There might be some things that we just feel aren’t for public sharing (like how most people wear clothes in public, hiding portions of their bodies)
- We might want to discuss something privately, avoiding embarrassment that might happen if it were shared publicly
- We might want a conversation or action that happens in one context not to be shared in another (context collapse)
- We might want to avoid the consequences of something we’ve done (whether ethically good or bad), so we keep the action or our identity private
- We might have done or said something we want to be forgotten, or at least made less prominent
- We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private
- We might want to avoid physical danger from a stalker, so we might keep our location private
- We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad)

When we use social media platforms, though, we at least partially give up some of our privacy.
I believe that being overly private is better than being excessively public. You do not know what information someone is looking for or needs from you, so giving the bare minimum lets you stay more private. I also think that when you sign up for most social media, you are willingly giving up some forms of privacy. The information may seem small or unimportant, but social media wasn't created to be private, so that trade-off is exactly what you're signing up for.
social-media-ethics-automation.github.io
8.7. Data Poisoning

People working with data sets always have to deal with problems in their data, stemming from things like mistyped data entries, missing data, and the general problem of all data being a simplification of reality. Sometimes a dataset has so many problems that it is effectively poisoned or not feasible to work with.

8.7.1. Unintentional Data Poisoning

Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people could get paid to take. Getting useful results depended on a wide range of people taking them, but when one TikToker’s video about taking them went viral, the surveys got filled out by mostly one narrow demographic, preventing many of the datasets from being used as intended. See more in A teenager on TikTok disrupted thousands of scientific studies with a single video – The Verge.
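One way an analyst might spot this kind of unintentional poisoning is to compare the demographic mix of survey responses before and after a suspicious spike in volume. A toy sketch in Python (the field names, numbers, and the doubling threshold are all invented):

```python
from collections import Counter

def demographic_share(responses, field="age_group"):
    """Fraction of responses falling in each category of `field`."""
    counts = Counter(r[field] for r in responses)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical survey responses before and after a viral video
before = [{"age_group": g} for g in ["18-24"] * 20 + ["25-34"] * 30 + ["35+"] * 50]
after  = [{"age_group": g} for g in ["18-24"] * 90 + ["25-34"] * 7 + ["35+"] * 3]

baseline = demographic_share(before)
for group, share in demographic_share(after).items():
    if share > 2 * baseline.get(group, 0):  # arbitrary rule: group doubled its share
        print(f"Possible poisoning: {group} jumped from {baseline[group]:.0%} to {share:.0%}")
```

A skew like this doesn't prove the data is unusable, but it flags that the sample no longer resembles the population the survey was designed for.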
I wonder how people working with datasets that may be poisoned go about fixing them, or determining how detrimental the poisoning really is. It makes sense that one person with a fan base from a specific demographic could skew the data in a survey; it's kind of obvious thinking about it now. I wonder why I never heard of this story or ones like it.
social-media-ethics-automation.github.io
Some data is directly provided to the platform by the users. Platforms may ask users for information like:
- email address
- name
- profile picture
- interests
- friends

Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all this, but they might):
- when users are logged on and logged off
- who users interact with
- what users click on
- what posts users pause over
- where users are located
- what users send in direct messages to each other

Online advertisers can see what pages their ads are being requested on, and track users across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website.
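The cross-site tracking described in the last paragraph works because the advertiser's server sees the same tracking identifier (often a third-party cookie) on every page that embeds its ads. A toy Python sketch of the idea (the domains and the "shoes" heuristic are invented):

```python
# Toy ad server: the same tracking cookie is seen on every page carrying its ads
pages_by_cookie: dict[str, list[str]] = {}

def serve_ad(cookie_id: str, page_url: str) -> str:
    """Record the page this user is on, then pick an ad based on their history."""
    pages = pages_by_cookie.setdefault(cookie_id, [])
    pages.append(page_url)
    if any("shoes" in p for p in pages):  # interest inferred from any page so far
        return "shoe ad"
    return "generic ad"

print(serve_ad("cookie-123", "news-site.example/article"))     # generic ad
print(serve_ad("cookie-123", "shop-site.example/shoes/run"))   # shoe ad
print(serve_ad("cookie-123", "recipe-site.example/pasta"))     # still a shoe ad
```

The third call is the key point: the user has moved to an unrelated site, but the shared cookie lets the advertiser carry the inferred interest along with them.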
I feel like I knew a lot about how different companies collect data on users, but I'm always shocked to learn about kinds I didn't think of. I never would have thought that knowing when users are logged on and off is something these companies tracked. I'm curious what other methods they use that people might not know about.
social-media-ethics-automation.github.io
We can trace Internet trolling to early social media in the 1980s and 1990s, particularly in early online message boards and in early online video games. In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from.
In this class, I keep learning new things about topics I thought I already understood. Just as I'd never considered social media before the internet, I'd never thought about trolling before social media. But it makes sense, because trolling doesn't have to happen only on social media; it can happen in person or in other corners of the internet. I would have guessed trolling started when the internet came out and people realized they could say things anonymously.
social-media-ethics-automation.github.io
Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction. When the goal is provoking an emotional reaction, it is often for a negative emotion, such as anger or emotional pain. When the goal is disruption, it might be attempting to derail a conversation (e.g., concern trolling), or make a space no longer useful for its original purpose (e.g., joke product reviews), or try to get people to take absurd fake stories seriously.
Trolling has become very common on social media, and I can't lie: I've been on both sides of it during my time online. But I realized I gain absolutely nothing from it, and if someone has to get happiness from ruining someone else's happiness, that person isn't in a good spot mentally.
social-media-ethics-automation.github.io
Arizona State University confirmed that they had no professors who matched the description of @Sciencing_Bi. Dr. McLaughlin’s and @Sciencing_Bi’s accounts were suspended from Twitter for violating Twitter policies, and Dr. McLaughlin eventually confirmed that she had completely invented @Sciencing_Bi.
It's pretty wild to me that @Sciencing_Bi was able to lie about who they were for so long without being caught. Today, if you claimed to be a professor on X, everyone would go fact-check it, and I feel like it would be caught almost instantly and wouldn't gain much attention.
social-media-ethics-automation.github.io
“What’s more, we can see that the Android tweets are angrier and more negative, while the iPhone tweets tend to be benign announcements and pictures. …. this lets us tell the difference between the campaign’s tweets (iPhone) and Trump’s own (Android).”
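The analysis being quoted worked by grouping tweets on the "source" field in their metadata and comparing sentiment across the groups. A much-simplified Python sketch of that approach (the tweets and word list here are made up; the original analysis used a full sentiment lexicon):

```python
# Hypothetical tweets with the posting device recorded in their metadata
tweets = [
    {"text": "Crooked media, so dishonest, sad!", "source": "Twitter for Android"},
    {"text": "Thank you Iowa! See you soon.", "source": "Twitter for iPhone"},
]

NEGATIVE_WORDS = {"crooked", "dishonest", "sad", "bad", "weak"}  # toy lexicon

def negativity(text: str) -> int:
    """Count negative-lexicon words in a tweet (a crude sentiment score)."""
    return sum(word.strip("!,.#").lower() in NEGATIVE_WORDS for word in text.split())

by_device: dict[str, list[int]] = {}
for t in tweets:
    by_device.setdefault(t["source"], []).append(negativity(t["text"]))

for device, scores in by_device.items():
    print(device, sum(scores) / len(scores))  # Android tweets score as angrier
```

The whole analysis hinges on the device being recorded in each tweet's metadata; once that field disappeared from the interface, this kind of attribution became much harder.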
I remember the feature on Twitter where it would show you which device the author was posting from. I never saw value in it, but in this situation it was useful for seeing which tweets came from his team and which came from him specifically. I find it a little funny that Trump's posts were angrier while his team's were calmer, basically cleaning up his mess.
social-media-ethics-automation.github.io
The comedy website Something Awful was created in 1999, and it included web forums where many popular memes of the day originated. While the Something Awful forums had edgy content, one 15-year-old member of the Something Awful forum called “Anime Death Tentacle Rape Whorehouse” was frustrated by content restrictions on Something Awful, and created his own new site with fewer restrictions: 4chan.

5.5.2. 4chan

4chan was created in 2003 by copying the code from a Japanese image-sharing bulletin board called Futaba or 2chan.
It's funny how even on a site named "Something Awful" a person still felt restricted. But I can't say I'm mad about it, because I feel like I've heard a couple of useful stories that came out of 4chan due to the lack of restrictions.
social-media-ethics-automation.github.io
- Graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts
- Books and news write-ups had to be copied by hand, so that only the most desired books went “viral” and spread
I never thought about the idea of social media existing before the internet. I had always assumed social media was created after the internet. But after reading this, I can understand how graffiti, as well as books, can be considered social media.
social-media-ethics-automation.github.io
Data points often give the appearance of being concrete and reliable, especially if they are numerical. So when Twitter initially came out with a claim that less than 5% of users are spam bots, it may have been accepted by most people who heard it. Elon Musk then questioned that figure and attempted to back out of buying Twitter; Twitter accused Musk's complaint of being an invented excuse to back out of the deal, and the case went to court.
I think data and numbers are the type of information humans absorb most easily. They simplify things and make them easier to understand, but this leaves gaps that people tend to fill in themselves. Since the data doesn't tell the whole story and is more of a final product, it isn't always reliable, and we shouldn't focus on it alone.
social-media-ethics-automation.github.io
Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata).
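As a concrete (invented) example of that split, a single post might be represented with its content as the data and everything about the posting as metadata:

```python
post = {
    "data": "Just finished my ethics reading!",     # the content itself
    "metadata": {
        "author": "@example_user",                  # who posted it (hypothetical)
        "timestamp": "2024-10-05T14:32:00Z",        # when it was posted
        "source": "Twitter for iPhone",             # what device/app posted it
        "location": "Seattle, WA",                  # where it was posted from
    },
}
```

The same text posted by a different account, at a different time, from a different device would have identical data but entirely different metadata.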
I find this interesting. I never thought of splitting up the kinds of information in a post to understand it better, but it makes sense now. The metadata is less about the tweet itself and more about the background information of the post, while the data is the main tweet and the point the person is trying to make.
social-media-ethics-automation.github.io
[Morten] Bay found that 50.9% of people tweeting negatively about “The Last Jedi” were “politically motivated or not even human,” with a number of these users appearing to be Russian trolls. The overall backlash against the film wasn’t even that great, with only 21.9% of tweets analyzed about the movie being negative in the first place.
This quote has a lot of information, but I'm a little confused about why Russian trolls would specifically pick The Last Jedi to review-bomb and flood with bad comments. It's interesting to see how bots can be used for such large-scale efforts, but also for random things like this one.
social-media-ethics-automation.github.io
Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers.
It's interesting to me how far bots have come. I never imagined we'd reach the day where laws have to be enforced on technology because of how powerful it has become. This quote shows that a bot can be created for one reason but edited and used for another.
social-media-ethics-automation.github.io
“Rational Selfishness”: It is rational to seek your own self-interest above all else. Great feats of engineering happen when brilliant people ruthlessly follow their ambition.
I chose to comment on this quote specifically because I disagree that egoism is a good ethic to hold in place of community. I agree that great feats can be achieved by people focusing only on themselves, but I think a collaborative group effort would be more effective.
social-media-ethics-automation.github.io
“We’re not making it for that reason but the way ppl choose to use it isn’t our fault. Safeguard will develop.” But tech is moving so fast that there is no way humanity or laws can keep up. We don’t even know how to deal with open death threats online.
This quote ties into our discussion in class about who the blame should fall on when someone uses technology negatively. I think this passage is deep because it shows that even when people are creating something great, they still have to think about how it could be used negatively.