- Dec 2023
-
social-media-ethics-automation.github.io
-
21.3.1. As a Social Media User#
The call to become a more informed user is particularly crucial in an age where misinformation and emotional manipulation are rampant online. Understanding strategies used to spread content, especially negative or provocative material, is important. This knowledge empowers users to make conscious choices about their engagement, helping them to avoid unintentional amplification of harmful content. Moreover, this awareness fosters a more responsible and ethical approach to social media use. Recognizing the intent behind posts -- whether they're seeking genuine interaction or merely trying to provoke for virality's sake -- allows users to contribute to a healthier online environment. This informed approach not only protects the individual user but also enhances the overall quality of social discourse.
-
21.2. Ethics in Tech
The story of Thamus critiquing Theuth's invention of writing illustrates how technology can simultaneously advance and impair human abilities -- enhancing communication but potentially weakening memory and understanding. Similarly, the Luddites' struggle wasn't anti-technology but a protest against its dehumanizing use, raising questions about who benefits or suffers from technological progress. These narratives demonstrate the necessity for ethical foresight in technological development. As we embrace more transformative technologies like AI, we must balance innovation with empathy and fairness, ensuring technology serves humanity's broader interests and aligns with our core values.
-
- Nov 2023
-
20.4. Mark Zuckerberg’s “Benevolent” Goals
The imagery of Zuckerberg amidst a crowd of brown children in India strikes me as a form of digital colonialism. While the aim of global connectivity has its merits, it's hard to ignore the potential benefits for Meta in terms of influence and control. Zuckerberg and Sandberg's claims of benevolence seem overshadowed by the inherent power dynamics and business interests involved. It's important to stay critical of the motives behind such initiatives and their impact on privacy, autonomy, and cultural diversity.
-
19.2.1. Surveillance Capitalism
The idea of Surveillance Capitalism, especially in the context of Meta's practices, really hits close to home for many of us. It's a reminder that our online interactions, which we often consider private, are actually commodities in a larger economic system. The example, where companies target ads based on extremely sensitive or controversial criteria, is not just a privacy violation, but it feels like a deep personal betrayal. It's unnerving to think about how our data – our digital footprints – can be used in ways we never intended or consented to. This reality calls for a more conscientious approach to how we share information online and a demand for greater transparency and ethical practices from these tech giants.
-
18.3.2. Schadenfreude
The concept of schadenfreude in public shaming, especially in celebrity culture, reflects a troubling aspect of human nature where we sometimes find entertainment in others' misfortunes. This phenomenon, as satirized by The Onion, isn't just about holding people accountable; it's more about a collective indulgence in the fall of those in the spotlight. It personally reminds me to question my reaction to such public downfalls. Are we seeking justice? Or are we just entertained? The introspection is crucial in an era where social media often blurs the line between accountability and entertainment, urging a more empathetic and thoughtful approach to how we perceive and react to the public missteps of others.
-
While public criticism and shaming have always been a part of human culture, the Internet and social media have created new ways of doing so.
The shift of public criticism and shaming to social media is a double-edged sword. While it empowers us to call out wrongs and promote justice, it also amplifies the harshness and reach of criticism, often leading to disproportionate shaming. The anonymity of the internet can fuel a more severe and unforgiving form of judgement, and the digital permanence of these criticisms can cause lasting harm. This phenomenon demonstrates the need for a balance between holding people accountable and exercising empathy and thoughtfulness in our online interactions.
-
When do you think crowd harassment is justified (or do you think it is never justified)?
I think crowd harassment is never justified, because it violates individual dignity and undermines the principles of respect and empathy. Even in the pursuit of a cause, there are always more constructive and respectful ways to address issues than resorting to harassment. Upholding respect for all individuals, regardless of the situation, is crucial to maintaining a civil and empathetic society.
-
We tend to think of violence as being another “normatively loaded” word, like authenticity. But where authenticity is usually loaded with a positive connotation–on the whole, people often value authenticity as a good thing–violence is loaded with a negative connotation. Yes, the doctor setting the bone is violent and invasive, but we don’t usually call this “violence” because it is considered to be a legitimate exercise of violence. Instead, we reserve the term “violence” mostly for describing forms of interference that we consider to be morally bad.
The reflection on "violence" versus "authenticity" demonstrates how language is shaped by our moral judgments. While authenticity is positively viewed, violence typically carries negative connotations. The example of a doctor setting a bone illustrates this: an act that could be seen as violent is perceived as healing due to its benevolent intent. This illustrates how our use of terms like "violence" reflects not just physical actions but our ethical perspectives on those actions.
-
Crowdsourcing Definition#
Crowdsourcing and crowdfunding, as modern incarnations of human collaboration, fascinate me. They epitomize how the internet has revolutionized our ability to connect and contribute to collective endeavors. The concept that individuals from around the globe, many of whom have never met, can come together to edit an article on Wikipedia or fund a groundbreaking project on Kickstarter is incredibly powerful. It democratizes the process of creation and innovation, breaking down geographical and social barriers. As someone who witnesses and participates in these phenomena, I see them as a testament to the collective intelligence and generosity of the human spirit. Yet, they also bring challenges, like ensuring quality and managing diverse opinions.
-
16.3.4. Crowd harassment
It's alarming how quickly a digital mob can form and act, wielding the collective power to track, identify, and intrude into someone's life. Sacco's experience, where crowds tracked her flight and eagerly awaited her reaction, illustrates a disturbing breach of personal boundaries under the guise of digital vigilantism. While the internet is a tool for collective action and awareness, it also poses the risk of amplifying impulsive, mob-like behaviors. This incident is a reminder that the digital world, for all its benefits, can quickly become a realm where privacy is compromised and actions have real, often disproportionate, consequences.
-
What support should content moderators have from social media companies and from governments? Do you think there are ways to moderate well that involve less traumatizing of moderators or taking advantage of poor people?
Reflecting on this issue from my perspective, I believe social media companies should prioritize the mental health and fair working conditions of content moderators. This means ensuring they have regular access to counseling and mental health services, alongside fair compensation and a supportive work environment. As for the role of governments, they should enforce fair labor practices and fund research into the psychological effects of content moderation.
In terms of reducing trauma for moderators, I see great potential in integrating advanced AI to filter out inappropriate content, thereby lessening their exposure to distressing material. However, it's important to maintain human oversight to balance the efficiency of AI with the nuanced judgment that only humans can provide. This approach could significantly reduce the volume of harmful content that moderators have to deal with, making their work less traumatizing.
-
YouTube comments can be a place to find these, particularly replies to comments. It’s hard to know where the spammers are currently getting away with spam, but you might try the latest honest trailer from ScreenJunkies, sort comments by “newest first” and then look for replies and see if any are spam. If you find one, try reporting it. What did you think of the options you were given for reporting spam?
Reflecting on the strategy to spot and report spam in YouTube comments, I find it a smart approach, particularly in high-traffic areas like ScreenJunkies' videos. Sorting by "newest first" is an effective way to catch recent, unmoderated spam.
Regarding reporting options, they're usually straightforward but sometimes lack nuance. Options like "Report Spam or Abuse" are simple but may not fully capture the complexity of certain comments. Additionally, the effectiveness of these tools often depends on the platform's response, which can be slow or inconsistent. In my experience, I appreciate the availability of these reporting mechanisms, but there's room for improvement in responsiveness and the granularity of reporting options.
-
For example, Facebook has a suicide detection algorithm, where they try to intervene if they think a user is suicidal (Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts). As social media companies have tried to detect talk of suicide and sometimes remove content that mentions it, users have found ways of getting around this by inventing new word uses, like “unalive.”
The evolution of language on social media, like the use of coded terms such as "unalive" to discuss sensitive topics, illustrates the complex dynamics between technology and human communication. While the intention behind such algorithms, like Facebook's suicide detection system, is to provide support and intervention, users' adaptability raises intriguing questions about the limitations of automated content moderation. How can social media platforms strike a balance between protecting users and respecting their freedom of expression? Moreover, it underscores the importance of addressing the underlying issues related to mental health and offering more comprehensive support beyond just automated content removal.
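To make this cat-and-mouse dynamic concrete, here is a minimal sketch in Python of how a naive keyword-based filter is evaded by coded language like "unalive". The term list and the matching logic are invented for illustration; this is not how any platform's real system works.

```python
# A minimal sketch (not any platform's actual system) of keyword-based
# content flagging, showing how a coded substitution evades it.
FLAGGED_TERMS = {"suicide", "kill myself"}

def flag_post(text):
    """Return True if any flagged term appears in the post text."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

print(flag_post("I've been thinking about suicide"))    # True: caught
print(flag_post("I've been thinking about unaliving"))  # False: slips through
```

Each new coded term forces moderators to update the list, which is exactly the arms race between automated moderation and user adaptability that this passage describes.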
-
Trauma dumping can be bad for the mental health of those who have this trauma unexpectedly thrown at them, and it also often isn’t helpful for the person doing the trauma dumping either:
I completely agree with this perspective on trauma dumping. It's a complex issue that can have negative effects on both the person sharing their trauma and the person unexpectedly receiving it. It raises questions about the importance of creating safe spaces for sharing and supporting those with trauma, where both parties can benefit. How can we foster more empathetic and constructive conversations around trauma? And what resources and guidelines should be available to handle these situations in a way that promotes healing and understanding for all involved?
-
The online community activity of copying and remixing can be a means of cultural appropriation, which is when one cultural group adopts something from another culture in an unfair or disrespectful way (as opposed to a fair, respectful cultural exchange). For example, many phrases from Black American culture have been appropriated by white Americans and had their meanings changed or altered (like “woke”, “cancel”, “shade”, “sip/spill the tea”, etc.).
The online culture of copying and remixing is indeed a complex area where cultural appropriation can often arise. It's crucial to distinguish between cultural appreciation and appropriation to ensure a respectful exchange of ideas and customs. Your example of phrases from Black American culture being appropriated is particularly relevant. When such expressions are taken out of their original context without understanding or respect, it perpetuates a cycle of disrespect and erasure.
To foster genuine cultural exchange, it's essential to engage with other cultures with a willingness to learn, understand, and appreciate the significance of what's being borrowed. In an era where our digital actions can have far-reaching effects, acknowledging the roots and cultural significance of shared content is not only respectful but also a step towards breaking down harmful patterns of appropriation.
-
How do you think attribution should work when copying and reusing content on social media (like if you post a meme or gif on social media)? When is it ok to not cite sources for content? When should sources be cited, and how should they be cited? How can you participate in cultural exchange without harmful cultural appropriation?
On social media, I believe it's important to provide attribution when I'm copying or reusing someone else's content, like images, memes, or GIFs. It's a sign of respect for the original creator, and I prefer to err on the side of giving credit, even when I'm unsure. While there are cases when it's okay not to cite sources, especially for widely recognized or publicly available content, it's still a good practice. For any direct use or remixing of creative work, sharing news, or academic content, I make sure to cite sources.
When it comes to citing sources, I find it helpful to include the creator's name or username, a link, or relevant hashtags. In cultural exchange, I make an effort to research and understand the culture I'm interested in, engage with its members, and be mindful of the context in which I'm using cultural elements. I avoid using sacred or meaningful symbols casually or for commercial purposes, and I always attribute the source when sharing cultural content, while being cautious about stereotypes or misrepresentation to participate respectfully.
-
Other strategies include things like:
These tactics make us wonder about the ethical side of social media engagement. Are we compromising trust for short-term gains, and can platforms find a way to keep it real while boosting content visibility? The ever-changing digital landscape adds another layer to this, making us question the sustainability of these strategies in the long run.
-
Elon Musk’s view expressed in that tweet is different from some of the ideas of the previous owners, who at least tried to figure out how to make Twitter’s algorithm support healthier conversation. Though even modifying a recommendation algorithm has limits in what it can do, as social groups and human behavior may be able to overcome the recommendation algorithm’s influence.
Recommendation algorithms can produce biased outcomes, and responsibility lies with both users and the system. Elon Musk's tweet highlights a shift in perspective, emphasizing the role of platform owners. However, algorithm modifications have limits, and the challenge remains in finding the right balance between user responsibility and systemic improvements. How to strike this balance effectively is a pressing question.
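As a rough illustration of why ranking tweaks matter but also have limits, here is a hypothetical sketch of engagement-based ranking versus a "healthier conversation" weighting. The posts, scores, and weights are all invented for the example.

```python
# Hypothetical sketch of how a ranking tweak changes what gets recommended.
# Post scores and weights are invented for illustration.
posts = [
    {"title": "outrage thread",   "engagement": 0.9, "civility": 0.2},
    {"title": "thoughtful reply", "engagement": 0.4, "civility": 0.9},
]

def rank(posts, w_engagement, w_civility):
    """Order posts by a weighted mix of engagement and civility scores."""
    return sorted(
        posts,
        key=lambda p: w_engagement * p["engagement"] + w_civility * p["civility"],
        reverse=True,
    )

# Pure engagement ranking surfaces the provocative post first...
print(rank(posts, 1.0, 0.0)[0]["title"])  # outrage thread
# ...while weighting civility flips the order, though users can still
# seek out and re-share the outrage post on their own.
print(rank(posts, 0.5, 0.5)[0]["title"])  # thoughtful reply
```

The second print shows the ranking change the platform can make; what it cannot control is users deliberately routing around it, which is the limit the passage points to.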
-
- Oct 2023
-
We mentioned Design Justice earlier, but it is worth reiterating again here that design justice includes considering which groups get to be part of the design process itself.
Reiterating the importance of Design Justice reminds us to ask who is at the design table. It's about acknowledging that the design process itself should be diverse and inclusive. This insight emphasizes that authentic, equitable design justice requires not just designing for communities but designing with them, ensuring their voices and needs are central in creating solutions that truly serve everyone.
-
In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between designers who make things, and the disabled people who things are made for. And unfortunately, as researcher Dr. Cynthia Bennett points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.” You can see Dr. Bennett’s research talk on this in the following YouTube video:
Dr. Cynthia Bennett's observations led me to consider some critical questions about accessible design. How can we bridge the gap between designers and the disabled community to ensure equal participation in the design process? What structural and cultural changes are needed to acknowledge disabled individuals as "real designers" with valuable insights into their own needs and preferences? And how can we incorporate diverse perspectives and experiences into the design process to create more holistic solutions that cater to a wider range of abilities and disabilities? Her research demonstrates the need to break down hierarchies in design, prioritize diverse perspectives, and recognize disability as a valuable form of expertise, encouraging a more inclusive and innovative approach to accessible design.
-
Non-User Information: Social Media sites might collect information about people who don’t have accounts, like how Facebook does
It's concerning that social media sites collect data on non-users, like Facebook does. Our online privacy should be respected, even if we choose not to use these platforms.
-
While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.
Social media is a double-edged sword. While we use social media to connect, we also worry about data breaches and misuse. We should always be cautious about what we share, and these companies need to step up their game when it comes to keeping our personal information safe.
-
People in the antiwork subreddit found the website where Kellogg’s posted their job listing to replace the workers. So those Redditors suggested they spam the site with fake applications, poisoning the job application data, so Kellogg’s wouldn’t be able to figure out which applications were legitimate or not (we could consider this a form of trolling). Then Kellogg’s wouldn’t be able to replace the striking workers, and they would have to agree to better working conditions.
It's interesting to see how some people are using creative online tactics to support striking workers and push for better working conditions. While the idea of spamming job applications might be seen as a form of trolling, it reflects a digital form of protest and collective action. It remains to be seen how effective these tactics will be in achieving their goals, but it's a reminder of the power of online communities in advocating for change in the real world.
-
Do you think there is information that could be discovered through data mining that social media companies shouldn’t seek out (e.g., social media companies could use it for bad purposes, or they might get hacked and others could find it)?
When it comes to data mining by social media companies, it's vital to keep things ethical and user-focused. Imagine how you'd want your own personal data handled. We all value our privacy, so it's essential that companies get our clear, informed consent before collecting and using our sensitive information. Security is another big concern; no one wants their data to be at risk from hackers, potentially leading to identity theft or online harassment.
Also, we should be wary of biases in data mining. Sometimes, historical data can be biased, which can lead to unfair or discriminatory outcomes. To build trust, companies should be open about what data they're collecting and why they're using it. They need to follow legal regulations to avoid trouble and protect users' rights.
-
There is a reason why stereotypes are so tenacious: they work… sort of. Humans are brilliant at finding patterns, and we use pattern recognition to increase the efficiency of our cognitive processing. We also respond to patterns and absorb patterns of speech production and style of dress from the people around us. We do have a tendency to display elements of our history and identity, even if we have never thought about it before.
Stereotypes endure because we are good at pattern recognition that streamlines cognitive processing and decision-making. These patterns are absorbed from our surroundings, unconsciously influencing our behaviours. While this pattern recognition is efficient, it also underlies the persistence of stereotypes, demonstrating the relationship between innate cognitive processes and the societal context that shapes our perceptions.
-
Every “we” implies a not-“we”. A group is constituted in part by who it excludes. Think back to the origin of humans caring about authenticity: if being able to trust each other is so important, then we need to know WHICH people are supposed to be entangled in those bonds of mutual trust with us, and which are not from our own crew. As we have developed larger and larger societies, states, and worldwide communities, the task of knowing whom to trust has become increasingly large. All groups have variations within them, and some variations are seen as normal. But the bigger groups get, the more variety shows up, and starts to feel palpable. In a nation or community where you don’t know every single person, how do you decide who’s in your squad?
The observation that every "we" implies a "not-we" emphasizes that groups are defined by who they exclude. This concept stems from the need for trust and cooperation within a group. As societies have grown in scale, determining whom to trust has also become more challenging due to the proliferation of diversity within larger communities.
-
Catfishing: Create a fake profile that doesn’t match the actual user, usually in an attempt to trick or scam someone
This fabricated persona serves different purposes, ranging from personal amusement to more malicious intentions, like fraud or manipulation. Some catfishers aim to deceive and scam others. What's troubling is that these fake profiles can be incredibly convincing, making it challenging for users to differentiate between genuine and fraudulent accounts.
-
How do you think about the authenticity of the Tweets that come from Trump himself? Do you think it matters which human typed the Tweet? Does the emotional expression (e.g., anger) of the Tweet change your view of authenticity? How do you think about the authenticity of the Tweets that come from others in Trump’s campaign?
When I evaluate the authenticity of tweets, especially those attributed to figures like Donald Trump, I consider several factors. First and foremost, authorship is crucial. A tweet coming directly from the individual carries a different weight than one from a team managing their account. Emotional expressions like anger provide some insight into the author's mood, but it's not definitive proof of authenticity. Verification, such as the blue checkmark on Twitter, can boost confidence in an account's authenticity.
Account security is a factor to consider, as accounts can be hacked or taken over, potentially compromising authenticity.
As for tweets from campaign members, their authenticity depends on their recognized role and whether they accurately represent the campaign's stance. Although not authored by the main figure, they still reflect the campaign's messaging strategy.
-
The user interface of a computer system (like a social media site), is the part that you view and interact with. It’s what you see on your screen and what you press or type or scroll over. Designers of social media sites have to decide how to layout information for users to navigate and decide how the user performs various actions (like, retweet, post, look up user, etc.). Some information and actions will be made larger and easier to access while others will be smaller or hidden in menus or settings.
The UI of a computer system is like the face of your favourite social media site: it's where you click, type, and scroll. Behind the scenes, designers decide where everything goes and how one can post or like. The effectiveness of a UI design hinges on how well it facilitates user interactions, offers accessibility to information, and maintains an intuitive and visually appealing layout, all while considering ethical implications to ensure user well-being and privacy.
-
Designers sometimes talk about trying to make their user interfaces frictionless, meaning the user can use the site without feeling anything slowing them down.
The idea of frictionless design is ostensibly beneficial, with its goal to enhance user satisfaction. However, a frictionless design can sometimes obscure crucial information and actions, causing filter bubbles and echo chambers.
Another ethical concern is the potential for addictive behaviour. The frictionless experience encourages endless scrolling and incessant notifications. Designers bear the ethical responsibility of promoting healthy usage patterns and avoiding the creation of digital addiction.
-
This means that how you gather your data will affect what data you come up with. If you have really comprehensive data about potential outcomes, then your utility calculus will be more complicated, but will also be more realistic. On the other hand, if you have only partial data, the results of your utility calculus may become skewed. If you think about the potential impact of a set of actions on all the people you know and like, but fail to consider the impact on people you do not happen to know, then you might think those actions would lead to a huge gain in utility, or happiness.
This quote demonstrates the profound impact of data quality and coverage on our perception of reality and the consequences of our actions. When we possess high-quality data about potential outcomes, our utility calculus becomes more realistic. In such cases, we are better equipped to assess the full range of potential impacts and make informed decisions. This comprehensive understanding allows us to weigh the pros and cons, fostering more balanced choices.
On the other hand, when our data is incomplete, our utility calculus may be skewed. In this case, we might overestimate the potential benefits/underestimate the drawbacks of our actions. This skewed perception arises from the limited information we have, which fails to capture the full scope of potential consequences.
Furthermore, when we consider the impact of our actions primarily based on those we know and like, while neglecting the well-being of people we don't happen to know, we risk making choices that lead to a biased assessment of utility. This narrow perspective can result in decisions that favour one group at the expense of another.
Ultimately, this quote highlights the importance of data-driven decision making that considers not only the readily available information but also strives for a more inclusive understanding of potential outcomes.
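A tiny worked example of how partial data skews a utility calculus. The utility numbers are made up purely to illustrate the point about coverage.

```python
# Made-up utility numbers: the same action scored with partial vs. fuller data.
impacts = {
    "people_we_know": +10,       # happiness gained by those we considered
    "strangers_affected": -15,   # harm to people left out of our data
}

partial_utility = impacts["people_we_know"]  # only counting people we know
full_utility = sum(impacts.values())         # counting everyone affected

print(partial_utility)  # 10 -> the action looks like a clear gain
print(full_utility)     # -5 -> with fuller data, it is a net harm
```

The sign of the answer flips entirely depending on who is counted, which is exactly the skew the quoted passage warns about.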
-
“Twitter has repeatedly said that spam bots represent less than 5% of its total user base. [Elon] Musk, meanwhile, has complained that the number is much higher, and has threatened to walk away from his agreement to buy the company.” (Musk’s Dispute With Twitter Over Bots Continues to Dog Deal, by Kurt Wagner, Bloomberg, July 7, 2022)
The data in question here is what percentage of Twitter users are spam bots, which Twitter claimed was less than 5% and Elon Musk claimed is higher than 5%. Data points often give the appearance of being concrete and reliable, especially if they are numerical. So when Twitter initially came out with the claim that less than 5% of users are spam bots, it may have been accepted by most people who heard it. Elon Musk then questioned that figure and attempted to back out of buying Twitter; Twitter accused Musk’s complaint of being an invented excuse to back out of the deal, and the case is now in court.
Different definitions of what constitutes a "spam bot" and variations in identification methods can lead to different conclusions. The dispute's escalation to a potential lawsuit also demonstrates the importance of clear and transparent contracts and agreements. Another complexity of this dispute is the publicity of it. Public opinion can sway the course of such disputes and influence the reputations and perhaps decisions of the parties involved.
It is interesting to learn that ostensibly concrete data can be contested, and that the consequences extend beyond numbers to legal and financial implications.
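A hypothetical sketch of how the chosen definition drives the measured percentage. The accounts, fields, and thresholds below are all invented; the point is only that two defensible rules applied to the same data yield very different "bot" figures.

```python
# Invented accounts and thresholds: the measured "spam bot" percentage
# depends entirely on which detection rule you adopt.
accounts = [
    {"posts_per_day": 2,   "followers": 150},
    {"posts_per_day": 300, "followers": 3},
    {"posts_per_day": 80,  "followers": 40},
    {"posts_per_day": 1,   "followers": 900},
]

def strict_rule(a):
    # Count as a bot only if extremely active with almost no followers.
    return a["posts_per_day"] > 200 and a["followers"] < 10

def loose_rule(a):
    # Count as a bot if merely unusually active.
    return a["posts_per_day"] > 50

def pct(rule):
    """Percentage of accounts the given rule labels as bots."""
    return 100 * sum(1 for a in accounts if rule(a)) / len(accounts)

print(pct(strict_rule))  # 25.0 under the strict definition
print(pct(loose_rule))   # 50.0 under the loose one, double the estimate
```

Neither rule is "wrong"; they simply operationalize "spam bot" differently, which is why Twitter and Musk could both point at the same platform and report different numbers.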
-
How does allowing bots influence social media sites’ profitability?
Allowing bots on social media platforms significantly bolsters their profitability. Bots are adept at driving increased user engagement by providing continuous and personalized interactions. They respond to user queries promptly and offer content recommendations tailored to individual preferences, effectively keeping users active on the platform. This heightened user engagement is especially attractive to advertisers, who are willing to pay more for ad space when they see higher user activity. Moreover, bots find application in customer support, handling a substantial volume of inquiries efficiently. This not only improves user satisfaction but also reduces customer service costs, indirectly benefiting the platform's profitability.
-
Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers. This means we can analyze the ethics of the action of the bot, as well as the intentions of the various people involved, though those all might be disconnected.
The disconnection highlights the need to scrutinize not only the actions performed by bots but also the underlying intentions of their creators. It reminds us that while some bot developers may have benevolent objectives, others may have less honorable aims, which can result in bots being used for malicious purposes. This complexity makes it essential to assess the ethical implications of how bots are used and the potential consequences of their actions.
This text serves as a reminder for robust ethical considerations and critical analysis when dealing with automated systems and their societal impacts.
-
Different groups have different sets of virtues: Quaker SPICES (Simplicity, Peace, Integrity, Community, Equality, Stewardship); US Army LDRSHP (Loyalty, Duty, Respect, Selfless Service, Honor, Integrity, Personal Courage)
I disagree with the framing that these two sets of virtues are fundamentally different. While they do reflect the distinct objectives of the Quaker tradition and the military, there is some overlap between them.
For instance, both frameworks emphasize integrity as one of their core values. Since integrity involves honesty, responsibility, hard work, and many other qualities that are universally regarded as virtues, the two frameworks are fundamentally similar, even with their different objectives.
-