social-media-ethics-automation.github.io
What if social media sites were governed by their users instead of by shareholders (e.g., governed by the subjugated instead of the colonialists)?
While I have hope that this would lead to more ethical behavior on the part of the platform holder, understanding how anarchy sites like 8chan work, I do not believe that things can solely be run by users.
fiduciary duty,
The concept of fiduciary duty is why more economically left-minded people proclaim "there is no ethical consumption under capitalism". It is not that capitalism is immoral, but amoral: completely devoid of moral thinking, only "more money" or "less money". While this can lead to more ethical behavior, it frequently does not.
Nov 2024
How would that retracted tweet look when viewed?
This feature similarly already exists on sites like YouTube, which allows you to slightly modify your videos or fully edit your comments. There, there's a social expectation to clarify what you are editing, or to only edit if you changed your view. It's also very common to see comments stop getting likes after they are edited.
Do you think there are situations where reconciliation is not possible?
One of the most difficult aspects of absolving immoral actions (or, I guess, behaviors that deviate from the social norm) is that those who are called out for their actions can be incapable of viewing them as harmful at all, and either double down or blame others for being "sensitive" or trying to "cancel" them.
often use tactics that avoid being illegal
The other thing is that online language develops so fast that harassers will always find new ways to harass others if given ANY way to communicate. Strings of letters like "YWNBAW" or seemingly innocent phrases like "keep yourself safe" are shorthand for transphobia and death threats, respectively. And even if you were to ban key phrases, those phrases are often used jokingly by those most likely to be targeted, saying them to friends that they trust.
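The failure mode described above can be sketched in a few lines. This is a minimal toy example (the blocklist and test messages are hypothetical, not from any real platform) showing why exact-match keyword filtering both misses new coinages and flags in-group joking use:

```python
# Hypothetical blocklist of known harmful shorthands.
BLOCKLIST = {"ywnbaw", "kys"}

def is_blocked(message: str) -> bool:
    """Flag a message if any token exactly matches the blocklist."""
    tokens = message.lower().split()
    return any(token.strip(".,!?") in BLOCKLIST for token in tokens)

# A known acronym is caught...
print(is_blocked("YWNBAW"))              # True
# ...but the same threat spelled out as an innocent-looking phrase
# slips straight through...
print(is_blocked("keep yourself safe"))  # False
# ...while a trusted friend using the term jokingly is flagged anyway.
print(is_blocked("lmao kys bestie"))     # True
```

The filter can only ever chase the previous generation of coded language; it has no notion of who is speaking to whom, which is exactly the context that distinguishes harassment from reclaimed in-group use.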
.”
Fake crowds are a serious problem, especially when they are one community masquerading as another. This is similar to what was discussed in previous chapters about users making fake accounts to falsely support arguments, but on a larger scale. In 2023, some of the worst college campus riots, despite being started by pro-Palestinian supporters, were turned violent by apolitical counter-protesters (some of whom had ties to Nazism).
Power Users and Lurkers
To me this is a prime example of how much power a "vocal minority" can have over the perception of a group of people; this is the same effect as "one bad apple spoils the bunch", or how entire communities can be labeled by the actions of a small few.
Fold-It
I've heard of this one! This is the go-to example when game designers discuss the power of community and how "gamification" can allow for scientific development. By challenging players to be the most optimal, researchers just... HAVE the most optimal folding patterns.
Have you ever reported a post/comment for violating social media platform rules?
The only times I have reported content were when it 1) was obviously hateful (either toward a group or as a targeted attack) and 2) was trying to stir up engagement for its own sake, similar to spam or botting. If someone is just being generally rude, I dislike their post if possible.
(like movie clips) or child pornography
I remember that 8chan, a spinoff created by 4chan users who thought its content moderation was TOO overbearing, used the legal defense of "if it's legal somewhere, it's legal here". This meant that moderation was EXTREMELY lax, and the site hosted many types of typically illegal or restricted content.
.
Another healthy way to spend time on social media (and how I tend to use it) is by slotting away time for it and having a goal. Don't go on X/Insta for the sake of scrolling, but just to use the search bar and look at one specific topic and reactions to it. This isn't a clean solution, but it does help prevent doomscrolling.
While there are benefits to venting,
I know from personal experience that even venting, defined here as directed toward a consenting audience, can still lead to poor outcomes for both parties. On the side of someone who wants to support their friends, exposing themselves to others' trauma and dedicating brain space to it can be taxing, and can even itself be a form of trauma (this is why therapists have therapists). Other times, the person who is venting secretly wants attention; even if their pain is real, they get jealous when they see others with "lesser" pain be given more sympathy.
Sometimes content goes viral in a way that is against the intended purpose of the original content.
Earlier in the textbook there was an ethical dilemma associated with antagonistic virality, such as trolling: is virality/parody/trolling justified to "punish" a bad actor? When people re-edit Stonetoss comics so that it seems he is in support of trans rights and pro-immigration, is this ethical? Is it any more ethical to re-edit a Stonetoss pro-cryptocurrency comic so that the final panel just says 'amogus', losing all possible political edge?
Different designs of social media platforms will have different consequences in what content goes viral,
In this way, that's how in-jokes and some posts can become viral within communities. Would "virality" be considered different depending on whether something is viral for just one person versus viral for a whole community? ("Breaching containment")
Oct 2024
Fig. 11.1 A tweet highlighting the difference between structural problems (systemic analysis) and personal choices (individual analysis).
People who think like this aren't dismissing that the victim is having an issue; from my experience, they recognize the difference between structural problems and personal problems. They just don't recognize that the system itself can be flawed, only that it creates problems for individuals; and if a system is causing problems for individuals, then there must be something those individuals are doing wrong that others aren't.
What responsibilities do you think social media platforms should have in what their recommendation algorithms recommend
While the stated answer would be algorithms that best show users what they want, the business answer would be the content that keeps users on the platform longest so they see more ads. This means that algorithms kill the middle ground and highlight extremes, so that only the most popular and the most hated content are known, especially if sites have no way to show dissent about a post (such as through a dislike button). When you try to alter this to be more ethical, you can try to hide the most hated content, but then this creates echo chambers; not eliminating extremism, but keeping pockets of it alive.
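The two ranking philosophies above can be contrasted in a toy sketch (all posts, counts, and rules here are hypothetical, not any real platform's algorithm). Ranking by raw engagement treats outrage as equal to approval, while hiding heavily disliked posts removes extremes from view without making their audience disappear:

```python
posts = [
    {"title": "moderate take", "likes": 10, "dislikes": 5},
    {"title": "beloved post",  "likes": 90, "dislikes": 3},
    {"title": "hated post",    "likes": 40, "dislikes": 200},
]

def engagement(post):
    # Any reaction counts toward ranking: outrage is as valuable as approval.
    return post["likes"] + post["dislikes"]

ranked = sorted(posts, key=engagement, reverse=True)
print([p["title"] for p in ranked])
# ['hated post', 'beloved post', 'moderate take']
# The most hated post ranks first; the moderate take ranks last.

# The "more ethical" tweak: hide anything with more dislikes than likes.
visible = [p for p in posts if p["dislikes"] <= p["likes"]]
print([p["title"] for p in visible])
# ['moderate take', 'beloved post']
# The extreme content vanishes from view, but the users who liked it
# still exist -- the pocket just becomes invisible to everyone else.
```

The sketch illustrates the trade-off in the comment: neither objective function eliminates extremism; they only decide whether it is amplified or sealed off.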
alt-text
One unfortunate use-case of alt-text is as an alternate caption. While this is usually done for comedic effect (see: XKCD), it sacrifices the accessibility that alt-text was intended to provide. And even though not everything is for all audiences (a digital painting is not for a blind audience by design), it does mean that many images that could have benefited from alt-text, such as comics or stock images, aren't using it for its intended purpose.
has a disability in that situation.
It is interesting how "in that situation" is specified. In this way of thinking (which I agree with), the more people a product/service accommodates, the fewer people are "disabled" from using it. However, I disagree with the usage of disability here; the difference is that being disabled (the adjective) describes interactions, while having a disability (the noun) is a condition. So you can have a disability but not always be disabled by it, or you can be disabled at one point without having a disability. Language is annoying.
But social media companies often fail at keeping our information secure.
I once heard a lecture from a cybersecurity expert, and they had us understand that cybersecurity is a matter of balancing needs against profit. The example they gave was that if you had $100, you wouldn't buy a $100 locked box to keep it safe; maybe a $5 or $15 lock. So it's never a matter of what is the "most" secure, but what can be most practically secured.
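The lecturer's lockbox point is really a back-of-the-envelope expected-loss calculation. A minimal sketch, with all figures hypothetical (the 10% theft probability is invented for illustration, and it assumes the defense fully prevents the loss):

```python
def worthwhile(asset_value: float, breach_probability: float,
               defense_cost: float) -> bool:
    """A defense is rational to buy only if it costs less than the
    expected loss it prevents (assuming it fully prevents the breach)."""
    expected_loss = asset_value * breach_probability
    return defense_cost < expected_loss

# Protecting $100 against a 10% chance of theft: expected loss is $10.
print(worthwhile(100, 0.10, 100))  # False: a $100 lock is overkill
print(worthwhile(100, 0.10, 5))    # True:  a $5 lock is proportionate
```

Real security economics adds many complications (defenses reduce rather than eliminate risk, losses include reputation, probabilities are guesses), but the core logic of spending in proportion to the value at risk is the same.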
In some cases we might want a social media company
Another example of "private" data you might want companies to have access to is your user metadata. In the case that you lose access to your account, you would want to be able to recover it via a password change.
For example, social media data about who you are friends
Not social media, but Google is egregious at using dystopian tactics to build your profile. There is a famous study (that has since been replicated in a livestream) where the number of letters you need to type before Google autofills "dog toys" is highly dependent on how frequently your device's microphone has picked up dog-related words. Anecdotally, I know it's not just microphone data, but keystroke data as well.
Try this yourself and see what Google thinks of you!
I've seen funny combinations among friends, especially those who are artists. Those who often go to baby-naming websites get targeted as middle-aged conservative women, instead of just authors.
For each theory, think of how many different ways the theory could hook up with the example.
A consequentialist perspective that aligns politically with the protest could believe that any harm done by the trolling would be outweighed by the impact the protest had, therefore believing it to be ethical. A Kantian perspective would hold that acting disingenuously or maliciously, regardless of reason, is always unethical.
Punish or stop
One example of punishment via trolling that comes to mind is from 2021, when Texas opened an abortion-reporting site. TikTokers from Texas started urging their fans to overload the website with thousands of fake reports, complete with local zip codes. In this way, people were encouraged to troll in the name of punishing anti-abortion services.
Anonymity can also encourage authentic behavior.
It is for this reason I feel that people are resistant to ID requirements for signing up for social media. Many countries are considering requiring a driver's license or ID to access 18+ or even 13+ material. While even an email could possibly reveal a name via a thorough investigation, it's still a manual barrier that prevents a hacker from easily doxxing large groups of people. However, providing a legal name and address to a third-party platform holder, even if secure, gives each user one less layer of protection in case of a data breach. (Not to mention that people already view platform holders as THE antagonistic force.)
(lonelygirl15)
lonelygirl15 was one of the first major examples of the "unreality" genre on the internet, following in the footsteps of The Blair Witch Project and inspiring future works like Slender Man. The appeal of unreality is similar to satire, tricking the audience into believing what they were watching was real, but for suspense or horror instead of comedy. Understandably, though, those who aren't "in on it" can feel betrayed if what they were being fed was a lie, especially in the infancy of a new medium (online video).
8Chan is also the source and home of the false conspiracy theory QAnon
I watched a documentary specifically on the QAnon conspiracy, and it shifts focus early on to 8chan. It's very unnerving watching how these people use their own platform and QAnon to make the most profit, regardless of the societal harm they are causing. It is partially for this reason that the documentary accuses the founders/admins of 8chan of, if not being QAnon themselves, working in collaboration with them.
Graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts
This reminds me of a joke popular in WWII, where soldiers found graffiti stating "Kilroy was here" with a long-nosed bald man peeking over something. It was a simple doodle to recreate, so the graffiti spread with multiple actors contributing.
Design a social media site
(Not designing one myself, but breaking down a funny social media site that I've seen.) Pithee was a site designed as a way to view and rank "shitposts" at a rapid pace. To accomplish this, the site is laid out with a banner and 5 blocks in the middle. The banner is small, with the logo and donation links in the corners and the leaderboard/profile/post buttons in the center. The 5 blocks contain 4 randomized posts by other users, the "most voted on winner" from the last 15 minutes at the top, and a shuffle button at the bottom.
This layout prioritizes reading anonymous users' posts, deprioritizes users' personal scores, and makes branding and donation opportunities visible only to those who want to support the platform.
Now, there are many reasons one might be suspicious about utilitarianism as a cheat code for acting morally, but let’s assume for a moment that utilitarianism is the best way to go. When you undertake your utility calculus, you are, in essence, gathering and responding to data about the projected outcomes of a situation. This means that how you gather your data will affect what data you come up with. If you have really comprehensive data about potential outcomes, then your utility calculus will be more complicated, but will also be more realistic. On the other hand, if you have only partial data, the results of your utility calculus may become skewed. If you think about the potential impact of a set of actions on all the people you know and like, but fail to consider the impact on people you do not happen to know, then you might think those actions would lead to a huge gain in utility, or happiness.
This reminds me most of measuring the value of life in systems such as trolley problems or AI car decision-making. Is a doctor more worthy of being saved than a musician? Or a depressed person? Or a felon? Where do you draw the line? If you draw a line, how many "felon lives" equal one doctor's life? Utilitarianism to me isn't a morality system itself but a coping mechanism that allows humans to rationalize tough decisions. But when humans put the same logic in computers, it's not a coping strategy for a computer's feelings, but just a flawed series of priorities.
But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting.
This is a clever analogy; however, the main difference between Oman's donkeys and social media bots is that in Oman, there isn't an industry set up specifically to farm and distribute donkeys for messaging. For bots, on the other hand, you can purchase pre-made bots to push your own agenda, and from the past 10 years we can see that this has real effects on our cultural landscape.
3.2.3. Corrupted bots
This reminds me a lot of current LLMs like ChatGPT. The data they're trained on has to be heavily moderated, and while key words or tokens can be blocked, a bot won't be able to adapt to how quickly humans change language. Anecdotally, friends have been able to convince some bots to tell people to "keep yourself safe/KYS", a play on a common term for online harassment. And even then, this phrase is still used jokingly. By the time a bot is able to 1) determine what a phrase really means and 2) tell which context it's being used in, humans will have already developed new language.
The ends justify the means. Utilitarianism: “It is the greatest happiness of the greatest number that is the measure of right and wrong.”
How consequentialists (or even those who use consequentialism to justify immoral actions) measure outcomes has always seemed brutish to me. To me it's not only a matter of whether this action does more good than that action; every action has inherent good or evil in it. If one were to do an outright evil action in order to achieve a good outcome, they still committed an evil action. When it comes to social media, consequentialism tends to be used as a coping strategy to defend the immoral actions of others; when there's an in-group and an out-group, you can't believe anyone in YOUR group is wrong.
One widespread
Interesting how "The Golden Rule" probably developed independently across several different religions, cultures, and regions. Does this suggest something innately universal about it? Or better yet, if it's so universal, why does it have to be repeated? Why is it so easy to forget in the online space? Is it not as "Golden" as we thought?