- Last 7 days
-
social-media-ethics-automation.github.io
-
Fig. 11.1 A tweet highlighting the difference between structural problems (systemic analysis) and personal choices (individual analysis).
People who think like this aren't dismissing that the victim is having an issue; in my experience, they do recognize the difference between structural problems and personal problems. They just don't accept that the system itself can be flawed, only that it creates problems for certain individuals; and if a system creates problems for some individuals but not others, then those individuals must be doing something wrong that the others aren't.
-
-
social-media-ethics-automation.github.io
-
What responsibilities do you think social media platforms should have in what their recommendation algorithms recommend
While the stated answer would be algorithms that show users what they most want to see, the business answer is the content that keeps users on the platform longest so they see more ads. This means the algorithms kill moderate content and highlight extremes, so only the most loved and the most hated content gets seen. If sites have no way to register dissent about a post (such as a dislike button), even hostile engagement just boosts it further. And when you try to make this more ethical by hiding the most hated content, you create echo chambers; extremism isn't eliminated, just kept alive in pockets.
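A minimal sketch of this tension, with hypothetical posts and scoring (not any real platform's formula): an engagement-maximizing ranker surfaces both the most loved and the most hated content, while simply filtering out hated posts doesn't eliminate it, it just removes it from mixed audiences.

```python
# Hypothetical posts with like/dislike counts; not real platform data.
posts = [
    {"id": "cat_pic",     "likes": 900, "dislikes": 50},
    {"id": "hot_take",    "likes": 400, "dislikes": 850},  # widely hated, but high engagement
    {"id": "book_review", "likes": 120, "dislikes": 10},   # moderate, low engagement
]

def engagement_rank(posts):
    """Business answer: rank by total interaction, so loved AND hated posts rise."""
    return sorted(posts, key=lambda p: p["likes"] + p["dislikes"], reverse=True)

def hide_disliked(posts, ratio=0.5):
    """'Ethical' tweak: drop posts whose dislike share is too high.
    The hated content vanishes from the main feed but survives in
    communities that only upvote it -- an echo chamber, not a fix."""
    return [p for p in posts if p["dislikes"] / (p["likes"] + p["dislikes"]) < ratio]

print([p["id"] for p in engagement_rank(posts)])  # hot_take rises to the top
print([p["id"] for p in hide_disliked(posts)])    # hot_take gone from the feed, not from the internet
```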
-
-
social-media-ethics-automation.github.io
-
alt-text
One unfortunate use of alt-text is as an alternate caption. While this is usually done for comedic effect (see: XKCD), it sacrifices the accessibility that alt-text was intended to provide. Not everything is for all audiences (a digital painting is not made for a blind audience by design), but it does mean that many images that could have benefited from alt-text, such as comics or stock images, aren't using it for its intended purpose.
-
-
social-media-ethics-automation.github.io
-
has a disability in that situation.
It is interesting how "in that situation" is specified. In this way of thinking (which I agree with), the more people a product/service accommodates, the fewer people are "disabled" when using it. However, I disagree with the usage of disability here; being disabled (the adjective) describes an interaction, whereas having a disability (the noun) is a condition. So you can have a disability but not always be disabled by it, or you can be disabled at one point without having a disability. Language is annoying.
-
- Oct 2024
-
social-media-ethics-automation.github.io
-
But social media companies often fail at keeping our information secure.
I once heard a lecture from a cybersecurity expert who made the point that cybersecurity is a matter of balancing security needs against cost. The example he gave was that if you had $100, you wouldn't buy a $100 locked box to keep it safe; you'd buy maybe a $5 or $15 lock. So it's never a matter of what is the "most" secure, but what can be most practically secured.
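A back-of-the-envelope version of that lecturer's point, with made-up numbers: protection is only worth buying while its cost stays below the expected loss it prevents.

```python
def worth_buying(asset_value, breach_probability, protection_cost, risk_reduction):
    """Compare the expected loss avoided against the cost of the protection (illustrative only)."""
    expected_loss_avoided = asset_value * breach_probability * risk_reduction
    return expected_loss_avoided > protection_cost

# $100 in the box, 20% chance someone tries to take it, lock stops 90% of attempts.
print(worth_buying(100, 0.20, protection_cost=15, risk_reduction=0.9))   # True: $18 avoided > $15 lock
print(worth_buying(100, 0.20, protection_cost=100, risk_reduction=0.9))  # False: the $100 box isn't worth it
```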
-
-
social-media-ethics-automation.github.io
-
In some cases we might want a social media company
Another example of "private" data you might want companies to have access to is your account metadata. If you lose access to your account, you would want to be able to recover it via a password reset.
-
-
social-media-ethics-automation.github.io
-
For example, social media data about who you are friends
Not social media, but Google is egregious at using dystopian tactics to build your profile. There is a famous study (since replicated in a livestream) where the number of letters you have to type before Google autofills "dog toys" depends heavily on how frequently your device's microphone has picked up dog-related words. Anecdotally, I know it's not just microphone data, but keystroke data as well.
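For illustration only, here is a rough sketch of that "how many letters before it autofills" measurement; the suggestion lists are invented and this is not Google's actual autocomplete model or API.

```python
def letters_until_suggested(query, suggestions, top_n=3):
    """Return how many characters of `query` you must type before it
    shows up in the top-N suggestions; len(query) if it never does."""
    for i in range(1, len(query) + 1):
        prefix = query[:i]
        matches = [s for s in suggestions if s.startswith(prefix)][:top_n]
        if query in matches:
            return i
    return len(query)

# Invented profiles: one device's history is full of dog-related terms.
dog_owner_suggestions = ["dog toys", "dog food", "do it yourself", "documentary"]
other_suggestions     = ["documentary", "do it yourself", "doordash", "dog toys"]

print(letters_until_suggested("dog toys", dog_owner_suggestions))  # 1: suggested almost immediately
print(letters_until_suggested("dog toys", other_suggestions))      # 3: needs more typing
```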
-
-
social-media-ethics-automation.github.io
-
Try this yourself and see what Google thinks of you!
I've seen funny combinations among friends, especially those who are artists. Those who often visit baby-naming websites get targeted as middle-aged conservative women instead of just as authors.
-
-
social-media-ethics-automation.github.io
-
For each theory, think of how many different ways the theory could hook up with the example.
A consequentialist perspective that aligns politically with the protest could hold that any harm done by the trolling is outweighed by the impact the protest had, and therefore judge it ethical. A Kantian perspective would hold that acting disingenuously or maliciously, regardless of the reason, is always unethical.
-
-
social-media-ethics-automation.github.io
-
Punish or stop
One example of Punishment via Trolling that comes to mind: in 2021, Texas opened an abortion-reporting website. TikTokers from Texas then urged their fans to overload the site with thousands of fake reports, complete with local zip codes. In this way, people were encouraged to troll in the name of punishing anti-abortion services.
-
-
social-media-ethics-automation.github.io
-
Anonymity can also encourage authentic behavior.
It is for this reason, I feel, that people are resistant to ID requirements for signing up for social media. Many countries are considering requiring a driver's license or ID to access 18+ or even 13+ material. While even an email could possibly reveal a name through a thorough investigation, it's still a manual barrier that prevents a hacker from easily doxxing large groups of people. However, providing a legal name and address to a third-party platform holder, even if stored securely, gives each user one less layer of protection in case of a data breach. (Not to mention that people already view platform holders as THE antagonistic force.)
-
-
social-media-ethics-automation.github.io
-
(lonelygirl15)
lonelygirl15 was one of the first major examples of the "Unreality" genre on the internet, following in the footsteps of The Blair Witch Project and inspiring later works like Slenderman. The appeal of unreality is similar to satire: tricking the audience into believing that what they are watching is real, but for suspense or horror instead of comedy. Understandably, though, those who aren't "in on it" can feel betrayed when what they were being fed turns out to be a lie, especially in the infancy of a new medium (online video).
-
-
social-media-ethics-automation.github.io
-
8Chan is also the source and home of the false conspiracy theory QAnon
I watched a documentary specifically on the QAnon conspiracy, and it shifts its focus early on to 8chan. It's very unnerving watching how these people use their own platform and QAnon to maximize profit, regardless of the societal harm they are causing. It is partially for this reason that the documentary accuses the founders/admins of 8chan of, if not being QAnon themselves, then working in collaboration with them.
-
-
social-media-ethics-automation.github.io
-
Graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts
This reminds me of a joke popular in WWII, where soldiers found graffiti stating "Kilroy was here" alongside a long-nosed bald man peeking over something. It was a simple doodle to recreate, so the graffiti spread with multiple actors contributing.
-
-
social-media-ethics-automation.github.io
-
Design a social media site
(Not designing one myself, but breaking down a funny social media site that I've seen.) Pithee was a site designed as a way to view and rank "shitposts" at a rapid pace. To accomplish this, the site was laid out with a banner and 5 blocks in the middle. The banner was small, with the logo and donation links in the corners and the leaderboard/profile/post buttons in the center. The 5 blocks held 4 randomized posts by other users, the "most voted on" winner from the last 15 minutes at the top, and a shuffle button at the bottom.
This layout prioritizes reading anonymous users' posts, deprioritizes users' personal scores, and surfaces branding and donation opportunities only to those who want to support the platform.
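A rough sketch of the feed selection that layout implies; the field names, the 15-minute window, and the vote counts are my guesses at how Pithee might work, not its actual code.

```python
import random
import time

def build_feed(posts, now=None, window_seconds=15 * 60, n_random=4):
    """Pick the most-voted-on post from the last 15 minutes plus a few
    random posts by other users, roughly matching the layout described above."""
    now = now or time.time()
    recent = [p for p in posts if now - p["created_at"] <= window_seconds]
    winner = max(recent, key=lambda p: p["votes"], default=None)
    pool = [p for p in posts if p is not winner]
    return {"winner": winner, "random_posts": random.sample(pool, min(n_random, len(pool)))}

# Hypothetical posts: text, vote count, creation time.
now = time.time()
posts = [
    {"text": "post A", "votes": 12, "created_at": now - 300},
    {"text": "post B", "votes": 40, "created_at": now - 600},
    {"text": "post C", "votes": 99, "created_at": now - 3600},  # too old to "win"
    {"text": "post D", "votes": 2,  "created_at": now - 60},
    {"text": "post E", "votes": 7,  "created_at": now - 900},
]
feed = build_feed(posts, now=now)
print(feed["winner"]["text"])                    # post B: highest votes inside the window
print([p["text"] for p in feed["random_posts"]])
```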
-
-
social-media-ethics-automation.github.io
-
Now, there are many reasons one might be suspicious about utilitarianism as a cheat code for acting morally, but let’s assume for a moment that utilitarianism is the best way to go. When you undertake your utility calculus, you are, in essence, gathering and responding to data about the projected outcomes of a situation. This means that how you gather your data will affect what data you come up with. If you have really comprehensive data about potential outcomes, then your utility calculus will be more complicated, but will also be more realistic. On the other hand, if you have only partial data, the results of your utility calculus may become skewed. If you think about the potential impact of a set of actions on all the people you know and like, but fail to consider the impact on people you do not happen to know, then you might think those actions would lead to a huge gain in utility, or happiness.
This reminds me most of measuring the value of a life in systems such as trolley problems or AI car decision making. Is a doctor more worthy of being saved than a musician? Or a depressed person? Or a felon? Where do you draw the line? If you draw a line, how many "felon lives" equal one doctor's life? Utilitarianism to me isn't a morality system itself but a coping mechanism that allows humans to rationalize tough decisions. But when humans put the same logic in computers, it's not a coping strategy for a computer's feelings, just a flawed series of priorities.
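The textbook's point about partial data can be made concrete with invented numbers: summing happiness only over the people you happen to know can flip the sign of the whole calculus.

```python
# Invented utilities: how one action affects each person (+ good, - harm).
impacts = {
    "friend_a":   +5,
    "friend_b":   +3,
    "coworker":   +2,
    "stranger_1": -4,   # people the decision-maker never thinks about
    "stranger_2": -4,
    "stranger_3": -4,
}
known = {"friend_a", "friend_b", "coworker"}

partial_utility = sum(v for k, v in impacts.items() if k in known)
full_utility = sum(impacts.values())

print(partial_utility)  # +10: looks like a clear win
print(full_utility)     # -2: net harm once everyone affected is counted
```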
-
-
social-media-ethics-automation.github.io
-
But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting.
This is a clever analogy; however, the main difference between the Oman donkeys and social media bots is that in Oman, there isn't an industry set up specifically to farm and distribute donkeys for messaging. With bots, you can order pre-made accounts to push your own agenda, and over the past 10 years we've seen that this has real effects on our cultural landscape.
-
-
social-media-ethics-automation.github.io
-
3.2.3. Corrupted bots
This reminds me a lot of current LLMs like ChatGPT. The data these models are trained on has to be heavily moderated, and while keywords or tokens can be blocked, a bot won't be able to adapt to how quickly humans change language. Anecdotally, friends have been able to convince some bots to tell people to "keep yourself safe/KYS", a play on a common term of online harassment. And even then, this phrase is still used jokingly. By the time a bot is able to 1) determine what it really means and 2) tell which context it's being used in, humans will have already developed new language.
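A toy example of why keyword/token blocking lags behind how fast slang moves; the blocklist and messages are made up.

```python
import re

BLOCKLIST = {"keep yourself safe", "kys"}   # phrases the moderators already know about

def is_blocked(message):
    """Naive filter: flag a message only if it contains a known bad phrase."""
    text = message.lower()
    return any(re.search(r"\b" + re.escape(phrase) + r"\b", text) for phrase in BLOCKLIST)

print(is_blocked("kys loser"))                    # True: exact known phrase
print(is_blocked("k y s"))                        # False: trivial spacing evades the list
print(is_blocked("unalive yourself"))             # False: newer slang the list hasn't caught up to
print(is_blocked("stay safe out there, friend"))  # False: and context (sincere vs. joking) is untouched
```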
-
-
social-media-ethics-automation.github.io
-
The ends justify the means. Utilitarianism: “It is the greatest happiness of the greatest number that is the measure of right and wrong.”
How consequentialists (or even those who use consequentialism to justify immoral actions) measure outcomes has always seemed brutish to me. To me it's not only a matter of whether one action does more good than another; an action itself has inherent good or evil in it. If one were to do an outright evil action in order to reach a good outcome, they still committed an evil action. When it comes to social media, consequentialism tends to be used as a coping strategy to defend the immoral actions of others; when there's an in-group and an out-group, you can't believe anyone in YOUR group is wrong.
-
-
social-media-ethics-automation.github.io
-
One widespread
Interesting how "The Golden Rule" probably independently developed across several different religions, cultures, and regions. Does this suggest something innately universal about it? Or better yet, if it's so universal, why does it have to be repeated? Why is it easy to forget in the online space? Is it not as "Golden" as we thought?
-