- Mar 2024
-
social-media-ethics-automation.github.io
-
How have your views on social media changed (or been reinforced)?
After this class I am going to be a lot more mindful about how I interact on social media. After taking this class, all I do is think about the documentary The Social Dilemma and how easy it is to fall into the trap of social media. Honestly, I want to take a break. I feel like social media is very toxic, it is so easy to spread harmful information, and it's exhausting to see.
-
But even people who thought they were doing something good regretted the consequences of their creations, such as Eli Whitney who hoped his invention of the cotton gin would reduce slavery in the United States, but only made it worse, or Alfred Nobel who invented dynamite (which could be used in construction or in war) and decided to create the Nobel prizes, or Albert Einstein regretting his role in convincing the US government to invent nuclear weapons, or Aza Raskin regretting his invention of infinite scroll.
This is why diversity in a workforce is so important. One person alone cannot possibly think of all of the effects of their invention, research, or study. You need multiple people working on something collectively, bringing different perspectives, so you can evaluate its effects as widely as possible. Technology is complex and there is no one-size-fits-all solution; a single product isn't going to be 100% positive for everyone, whether because it was unethically created by exploiting cheap, hard labor, or because producing it ends up hurting the climate even more.
-
OLPC wanted to give every child in the world a laptop so they could learn computers, believing this would benefit the world. But this project failed for a number of reasons, such as: The physical device didn't work well; the hand-powered generator was unreliable and the screen was too small to read, so OLPC was not actually providing a "superior" product to the rest of the world. When they did hand some out, the laptops didn't come with good instructions; kids were just supposed to figure it out on their own, and if this failed, it must be the fault of the poor people around the world. And it wasn't designed for what kids around the world would actually want; OLPC didn't take input from actual kids around the world, thinking they had superior knowledge and just assuming they knew what people would want.
Honestly, this happens so often. Well-intentioned people try to create products that are beneficial for a marginalized group, or a group of people they are very unfamiliar with, without getting any input from them on how the device could be developed to actually benefit that community. Part of the problem is that the mere fact that the tech was developed and actually went out to the people it was intended for is enough for people to call it a success. The fact of the matter is that so many people don't have technological access, but no one cares because everything is for-profit, and the tech industry is no exception.
-
In particular, we want to highlight how the profession of programming went from being a disrespected, low-pay job for women, to being a highly respected and high paying job for men.
I never thought about the fact that women used to dominate the coding/programming field, though it reminds me of the movie Hidden Figures, where many of the "human computers" were women. I think it's so interesting that a field this complex was ever degraded. The fact that it was considered a disrespected, low-pay job for women is insane, especially considering that men dominate the field now that it is highly respected and high-paying.
-
- Feb 2024
-
In South Africa, when the oppressive and violent racist apartheid system ended, Nelson Mandela and Desmond Tutu set up the Truth and Reconciliation Commission. The commission gathered testimony from both victims and perpetrators of the violence and oppression of apartheid. We could also consider this, in part, a large-scale public shaming of apartheid and those who hurt others through it. Unlike the Nuremberg Trials, the Truth and Reconciliation Commission gave a path for forgiveness and amnesty to the perpetrators of violence who provided their testimony.
I really value rehabilitation and think this version of the Truth and Reconciliation Commission is a great way for perpetrators to own their wrongdoings; in return, it gives them the opportunity to heal those they have hurt, and also to heal themselves and move forward. I definitely think there are levels to it, but the forgiveness aspect is what really speaks to me.
-
While public criticism and shaming have always been a part of human culture, the Internet and social media have created new ways of doing so.
In a law class I took, we talked about informal and formal enforcement of law, and one thing I vividly remember was a county in California called Shasta County, where they rely heavily on public criticism, shaming, and shunning as a way to "keep people in line." While I do think we need to maintain certain standards, I think there is a right way to go about it. Educating people is important, and with social media, shaming and shunning are becoming much more common, as seen in cancel culture.
-
When do you think crowd harassment is justified (or do you think it is never justified)?
This is hard to say, but my honest opinion is that if the person being crowd harassed is openly posting and talking about hurting and killing others and just being hateful, I would say it is justified. Justifying something like this is a very tough thing to do, because nine times out of ten people do crowd harassing to condemn people for things they don't personally identify with. They tend to break down the individual, and I find that unjustifiable.
-
Have you experienced or witnessed harassment on social media (that you are willing to share about)?
I witnessed harassment of one of my friends in middle school, on Instagram and Snapchat. People used to take warped images of this person and post them to a made-up page that a bunch of our classmates followed. They would caption the images with very mean comments and really embarrass them. It was sad to see how affected my friend got, because they were generally very nice, so I didn't understand why someone would do that to them.
-
What do you think a social media company’s responsibility is for the crowd actions taken by users on its platform?
I feel like this should be viewed from more of a consequentialist point of view. If your company opens itself up to crowdsourcing, you leave a lot of room for people to take your company from something good and positive into something negative, and in that case, yes, the company is responsible for the crowd actions. Now, if the users on the platform take their crowd actions elsewhere, the company absolutely shouldn't have to assume responsibility.
-
Wikipedia: An online encyclopedia whose content is crowdsourced. Anyone can contribute: just go to an unlocked Wikipedia page and press the edit button. Institutions don’t get special permissions (e.g., it was a scandal when US congressional staff edited Wikipedia pages), and the expectation that editors do not have outside institutional support is intended to encourage more people to contribute. Quora: A crowdsourced question-and-answer site.
I was in my sophomore year of high school, doing research for a paper, when I found out that anyone can edit Wikipedia pages and that they are not very credible. I thought this was insane, considering that up until that point I had relied heavily on Wikipedia and believed everything I saw there. I guess what I am getting at is that while crowdsourcing is great for collaboration, it is risky and makes things less credible. There are a lot of issues, like how people can spread misinformation this way.
-
They have to get through many posts during their time, and given the nature of the content (e.g., hateful content, child porn, videos of murder, etc.), this can be traumatizing for the moderators:
I am not surprised that some social media platforms will use people in other countries as moderators and pay them next to nothing for their services; it is unfortunate, but it happens all the time now. That said, I had never really given much thought to how some of the workers may feel being exposed to the content they have to moderate. I never considered that it could be traumatizing to view most of that content, and it makes me question whether having these moderators is ethical, or whether there is a more ethical way to have moderators that doesn't expose them to never-ending hateful content. Almost like a moderator for a moderator.
-
But this meaning of “moderation” grew out of a wider, more generic concept of moderation. You might remember seeing moderation coming up in lists of virtues in virtue ethics, back in Chapter 2. So what does moderation (the social practice of limiting what is posted) have to do with moderation (the abstract ethical quality)?
I feel like a lot of terms have grown to mean things that are completely different from their original meaning. Especially in today's world with social media, we take concepts and only apply them when and how they benefit us; we choose their meaning and which parts to take. Moderation now is a concept left up to the creator, who puts limits on what people can and cannot post to maintain the flow of a platform. The original concept behind moderation is about restraint, recognizing that your actions have consequences, which leads to a deeper understanding.
-
For example, Facebook has a suicide detection algorithm, where they try to intervene if they think a user is suicidal (Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts). As social media companies have tried to detect talk of suicide and sometimes remove content that mentions it, users have found ways of getting around this by inventing new word uses, like “unalive.” Larger efforts at trying to determine emotions or mental health through things like social media use, or iPhone or iWatch use, have had very questionable results, and any claims of being able to detect emotions reliably are probably false.
There is no one-size-fits-all test for determining whether or not someone is depressed, suicidal, etc. And likewise, just because you may think someone is depressed or suicidal doesn't mean that they are. I think this attempt is well-intentioned, but at the same time there is too much bias involved in creating algorithms that try to detect this and 'correct' it. The bigger issue lies in the fact that the algorithm will try to correct the person. I think to be safe it is best to avoid this, as well-intentioned as it is.
-
“If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through.
This perfectly captures some of the ways I have been looking at social media lately. I do this thing now where I do a social media cleanse and deactivate my accounts and/or delete the apps from my phone for about a month, and it makes me feel way less lonely and overstimulated. Social media is great for staying connected with my peers, but at the same time I also feel so disconnected.
-
How do you think attribution should work when copying and reusing content on social media (like if you post a meme or gif on social media)? When is it ok to not cite sources for content? When should sources be cited, and how should they be cited? How can you participate in cultural exchange without harmful cultural appropriation?
I understand the copying and remixing that happens online, and while I find it unethical, I feel like you post under the assumption that a bunch of people will see it and you have no idea of, and no control over, what happens with it. That is the unfortunate part of social media when posting content: this sort of powerlessness in deciding how it is used, when it is used, and whether people will give you credit for it. It is hard to defend copyright on social media.
-
We’ll include several examples on this page from the TikTok Duet feature, which allows people to build off an original video by recording a video of themselves to play at the same time next to the original. So, for example, this tweet thread of TikTok videos (cross-posted to Twitter) starts with one TikTok user singing a short parody musical of an argument in a grocery store. The subsequent tweets in the thread build on the prior versions: first someone adds themselves singing the other half of the argument, then someone adds themselves singing the part of their child, then someone adds themselves singing the part of an employee working at the store:
Honestly this is a great feature and one of the reasons why it is so easy for content nowadays to go viral. Collaboration is huge and it can bring different audiences together. I like the collaboration feature on social media because I feel like it makes us more interconnected.
-
Friends or Follows:# Recommendations for friends or people to follow can go well when the algorithm finds you people you want to connect with. Recommendations can go poorly when they do something like recommend an ex or an abuser because they share many connections with you.
This is probably one of the most annoying things tbh. I have gotten people recommended to me that I don't like or have unfollowed. I feel like there were instances in which I unfollowed a person and they popped up on my feed more than ever.
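A hedged sketch of why that happens: friend recommenders often score candidates by how many mutual connections they share with you, which is exactly how an unfollowed person or an ex can keep surfacing. This toy example (made-up data and scoring, not any platform's real algorithm) shows the idea:

```python
# Toy friend-recommendation sketch: suggest the non-friend who shares
# the most mutual connections with a user. The data is invented.
friends = {
    "me":  {"ana", "ben", "cal"},
    "ana": {"me", "ben", "ex"},
    "ben": {"me", "ana", "ex"},
    "cal": {"me"},
    "ex":  {"ana", "ben"},
}

def recommend(user):
    candidates = {}
    for person, their_friends in friends.items():
        if person == user or person in friends[user]:
            continue  # skip the user themselves and existing friends
        # score = number of mutual connections (set intersection)
        candidates[person] = len(friends[user] & their_friends)
    return max(candidates, key=candidates.get)

print(recommend("me"))  # the "ex" wins purely on mutual connections
```

Notice that the algorithm has no idea *why* "ex" is not in the friend set; it only sees the overlap, which is exactly the failure mode the quote describes.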
-
Now, how these algorithms precisely work is hard to know, because social media sites keep these algorithms secret, probably for multiple reasons: They don’t want another social media site copying their hard work in coming up with an algorithm. They don’t want users to see the algorithm and then be able to complain about specific details. They don’t want malicious users to see the algorithm and figure out how to best make their content go viral.
I think this is a great way to curate a user's feed to keep them hooked on the social media site. One trend on TikTok was people trying to mess with and change their algorithms so their feed would be different from what they were used to. Data has to be collected for the algorithm to take effect for each user, and it makes me wonder how much data is collected, what type of data, and how much influence one kind of data has over another.
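Since the real ranking formulas are secret, here is a toy sketch of how engagement signals might be combined to order a feed. The posts and weights are entirely made up for illustration; no platform's actual algorithm is being shown.

```python
# Toy engagement-based feed ranking. The weights below are invented:
# comments and watch time count for more than likes, on the guess
# that they signal deeper engagement.
posts = [
    {"id": 1, "likes": 10, "comments": 2,  "watch_seconds": 5},
    {"id": 2, "likes": 3,  "comments": 20, "watch_seconds": 40},
    {"id": 3, "likes": 50, "comments": 0,  "watch_seconds": 2},
]

def score(post):
    return (post["likes"] * 1
            + post["comments"] * 5
            + post["watch_seconds"] * 2)

# Sort highest-scoring (most "engaging") posts first.
feed = sorted(posts, key=score, reverse=True)
print([p["id"] for p in feed])
```

Changing the weights reorders the feed, which hints at why people "messing with their algorithm" by changing what they watch and interact with actually works.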
-
How comfortable are you with Google knowing (whether correctly or not) those things about you?
I actually have this feature turned off, but I do like being able to see how my data is used to create personalized content for me. On the other hand, this feature is kind of scary because of the fact that I am always being tracked. I value privacy, and I know many others do too. Sometimes I get curious about certain topics and do a deep dive into research, but I don't want to keep seeing ads or content about that one thing over and over again. I wonder how the incognito tab in Google works: does it still collect data, and if so, what types, and how does it use them?
-
when users are logged on and logged off; who users interact with; what users click on; what posts users pause over; where users are located; what users send in direct messages to each other
I watched a documentary called The Social Dilemma and it opened my eyes to how advanced we are in collecting data and using it to tailor social media sites to an individual to keep them coming back. This data, as simple as it seems, has a profound impact on how a site functions for users. It is scary to think that we program platforms to basically keep us addicted to them. It really gave me a new perspective on social media and the way data is collected. I highly recommend watching it.
-
Non-User Information: Social Media sites might collect information about people who don’t have accounts, like how Facebook does
I don't like this. People choose not to have certain social media accounts because they know their information may not always be protected in the way they'd like. For social media sites like Facebook to still collect information from people who have intentionally opted out of the site is very unethical.
-
Phishing attacks, where they make a fake version of a website or app and try to get you to enter your information or password into it. Some people have made malicious QR codes to take you to a phishing site. Many of the actions done by the con man Frank Abagnale were portrayed in the movie Catch Me If You Can. One of the things you can do as an individual to better protect yourself against hacking is to enable 2-factor authentication on your accounts.
As a leasing agent who has access to a lot of personal information for the tenants, data for the company, etc., at the properties I work at, I have to do trainings to recognize phishing emails and other fake things that come to us, so I don't unintentionally give away access or important information. I have to have two-factor authentication activated on my work email and clock-in program, and it does make me feel better about my employee access security. After I was hacked on Twitter, I went through all of the apps I use, and my emails, to activate two-factor authentication, and I feel a lot better because I can detect someone trying to hack me before it happens and stop it.
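For the curious, the rotating six-digit codes that two-factor authenticator apps show can be computed with just the standard library. This is a minimal sketch following the RFC 6238 (TOTP) approach; the `b"shared-secret"` value is a made-up placeholder, not a real key.

```python
import hmac
import struct
import time

def totp(secret: bytes, at: float, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password, RFC 6238 style (SHA-1)."""
    counter = int(at) // step                  # 30-second time window
    msg = struct.pack(">Q", counter)           # counter as 8 big-endian bytes
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The phone and the server share the secret, so both can compute the
# same short-lived code; a phisher who steals only your password can't.
print(totp(b"shared-secret", time.time()))
```

Because the code depends on the current 30-second window, a stolen code expires almost immediately, which is a big part of why 2FA blunts phishing.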
-
In order for these users to still get the information intended from the images, the image can come with alt-text. You can read more about alt-text in this New York Times feature. Reddit unfortunately doesn’t allow alt-text for their images. So while we were going to have a programming demo here to look up the alt-text, there is no alt-text on images uploaded to Reddit to look up, meaning this site is unfriendly to blind or low-vision users.
I work in social media and marketing for a real estate company, and I manage three different properties' websites and marketing material. When I took on this role a little over a year ago, I had my first experience with alt-text. At first I didn't even know what it was, but after doing some research I saw how valuable adding it to my websites was, so we could be accessible to as many users as possible. I think at this point it should be a requirement on websites and apps.
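To show concretely what "adding alt-text" means at the markup level, here is a small sketch of an accessibility check using only Python's standard library: it scans HTML for `<img>` tags missing an `alt` attribute. The page snippet and file names are made up for illustration.

```python
# Minimal alt-text checker: report <img> tags a screen reader
# would have nothing to announce for.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):           # alt absent or empty
                self.missing.append(attrs.get("src", "?"))

page = """
<img src="tour.jpg" alt="Photo of the leasing office lobby">
<img src="floorplan.png">
"""
checker = AltTextChecker()
checker.feed(page)
print(checker.missing)  # images with no description for screen readers
```

Real accessibility audit tools (and the alt-text lookups the book does on other platforms) work on the same principle: the description either travels with the image in its markup or metadata, or it doesn't exist for the user.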
-
Most humans are trichromats, meaning they can see three base colors (red, green, and blue), along with all combinations of those three colors. Human societies often assume that people will be trichromats. So people who can’t see as many colors are considered to be color blind, a disability. But there are also a small number of people who are tetrachromats and can see four base colors and all combinations of those four colors. In comparison to tetrachromats, trichromats (the majority of people) lack the ability to see some colors. But our society doesn’t build things for tetrachromats, so their extra ability to see color doesn’t help them much. And trichromats’ relative reduction in seeing color doesn’t cause them difficulty, so being a trichromat isn’t considered to be a disability.
I think it is unfortunate that there are not many technological display alternatives for people who do not fit with the trichromat majority. I wonder if we will move toward offering other display options for people who are considered color blind. What would that look like?
-
- Jan 2024
-
In the Black Lives Matter protests of 2020, Dallas Police made an app where they asked people to upload videos of protesters doing anything illegal. In support of the protesters, K-pop fans swarmed the app and uploaded as many K-pop videos as they could, eventually leading to the app crashing and becoming unusable, and thus protecting the protesters from this attempt at police surveillance.
I like what the K-pop fans did in this situation. However, there is also a line that needs to be drawn, because there are some situations where this type of trolling/flooding could be especially harmful. There is a difference between protesting and rioting, and in instances where rioting occurs, I think the police having access to the type of content they were requesting would be beneficial.
-
or try to get people to take absurd fake stories seriously.
Honestly, this is a downfall of social media. Your feed becomes tailored to the content you stay on longer, like, comment on, or share. It becomes a conglomeration of things you interact with so it keeps you hooked, but some of that content can be misinformation, fake stories, or fake news, and people can take it literally without being able to determine whether it is factual, especially because of the trolls that intentionally create fake things to misinform others.
-
Different communities have different expectations and meanings around behavior and presentation. So what is appropriate authentic behavior depends on what group you are from and what group you are interacting with, like this gif of President Obama below: Fig. 6.6 President Obama giving very different handshakes to a white man and a Black man (Kevin Durant). See also this Key & Peele comedy sketch on greeting differences with Jordan Peele playing Obama, and also Key & Peele’s Obama’s Anger Translator sketch.
I code-switch on a daily basis. The way I speak and act at work or in class is completely different from the way I act at home with my family and friends. Even my online self tends to showcase my more formal side, as if I am at work or in another professional setting. This is honestly why I have multiple accounts: one is more business-oriented, a representation of how I want to be perceived in a formal setting, whereas the other highlights more of how I act at home with friends or family. Code-switching makes you feel safe and accepted in an environment that you feel would not be as accepting of your true self.
-
“My analysis … concludes that the Android and iPhone tweets are clearly from different people, “posting during different times of day and using hashtags, links, and retweets in distinct ways, “What’s more, we can see that the Android tweets are angrier and more negative, while the iPhone tweets tend to be benign announcements and pictures. …. this lets us tell the difference between the campaign’s tweets (iPhone) and Trump’s own (Android).”
This makes me wonder how much of the posting was actually Donald Trump. We have talked extensively about bots and how they can manage social media and basically function as if they were a normal human making posts. I remember seeing a bunch of his tweets and thought some were crazy, but now I am wondering if those were actually him or bots. Also, if bots were posting, why is this the narrative that the people who programmed the bots are going with? Who signed off on that?
-
Before electronic computers were generally available, when scientists wanted the results of some calculations, they sometimes hired “computers,” which were people trained to perform the calculations.
This reminds me of the women who worked at NASA to solve the calculations that would send a man into space for the first time. It is amazing what people can do, and seeing how we have been able to create machines that compute things for us even more accurately is beyond incredible. There is so much thought and critical thinking that goes into computing this type of information. I love how smart we are.
-
Justine lost her job at IAC, apologized, and was later rehired by IAC.
This is wild to me. I think it is a bad reflection on the company to fire her as a publicity move and then just rehire her once the heat died down a bit. One thing trending on TikTok is users reminding each other that they are leaving a digital footprint when they post wild things on the internet. I feel like it is an accountability thing and one benefit of the internet.
-
Another example of intentionally adding friction was a design change Twitter made in an attempt to reduce misinformation: When you try to retweet an article, if you haven’t clicked on the link to read the article, it stops you to ask if you want to read it first before retweeting.
I actually think that intentionally adding friction was a good thing on Twitter's behalf, because there is so much misinformation and there are so many half-truths on the internet that at first glance it is hard to determine whether something is factual. Someone else mentioned that it might make someone think twice before mindlessly retweeting, and I agree. I think it could also encourage people to seek out more information on topics they're reading about, which is an added bonus.
-
In the 1980s and 1990s, bulletin board systems (BBSs) provided more communal ways of communicating and sharing messages. In these systems, someone would start a “thread” by posting an initial message. Others could reply to the previous set of messages in the thread.
This seems kind of like how Twitter operates. Tweets, retweets, and the tweet threads people use to interact with one another work like one continuous bulletin board system. You can go down a rabbit hole with things like this.
-
So, for example, when Twitter tells me that the tweet was posted on Feb 10, 2020, does it mean Feb 10 for me? Or for the person who posted it? Those might not be the same.
It is baffling to me that I never thought of this! Someone else mentioned how this difference in time could reflect in the internet information gap. I feel like it would make social media information so much harder to gather.
-
When computers store numbers, there are limits to how much space can be used to save each number. This limits how big (or small) the numbers can be, and causes rounding with floating-point numbers.
I guess this makes a lot of sense, but it is something I had never thought of before; everything has its own storage limits. As someone who has not had much experience with coding, I would like to know more about floating-point numbers and how exactly you use them. I think app creation is cool, but I never thought about something like Twitter having limits on storage. It makes sense; it just never occurred to me.
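For anyone curious, the rounding the quote mentions is easy to see in a few lines of Python:

```python
# Floating-point numbers have limited precision, so some decimal
# values can't be stored exactly and arithmetic rounds.
print(0.1 + 0.2)            # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)     # False

# Very large floats also lose the ability to tell nearby
# integers apart: past 2**53, adding 1 gets rounded away.
big = 2.0 ** 53
print(big + 1 == big)       # True
```

This is exactly the size-versus-precision trade-off the book describes: a fixed amount of space per number means something has to give at the extremes.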
-
Fig. 3.1 A photo that is likely from a click farm, where a human computer is paid to do actions through multiple accounts, such as liking a post or rating an app. For our purposes here, we consider this a type of automation, but we are not considering this a “bot,” since it is not using (electrical) computer programming.
I have never heard of a click farm before. I think this is very interesting, and I wonder what the rules and regulations are around it. Is it an underground sort of thing? What are some of the ethical concerns with click farms? I just feel like this is a very unfair thing to have access to.
-