87 Matching Annotations
  1. Nov 2023
    1. AI is finding its value not in its ability to fight but in its value to protect and defend.

      I would hardly say it's assisting in protecting or defending, especially given the reasoning spelled out in this article. Is it a helpful tool for people to possibly find their loved ones? Yes. But to act as though it's assisting with front-line protection and defense is, I would say, entirely false.

    2. Since Hamas has so disgustingly yet proudly filmed and uploaded its atrocities, advanced technologies can use facial recognition tools to identify the faces of the kidnapped victims and try to assess their location or physical state.

      This is what brings "peace"? Yes, it is incredibly helpful amidst the conflict, but to say this brings peace is a BIG utopian stretch.

    3. History shows us there is no need for AI to innovate misinformation. Recycled images of war and tools like Photoshop are adequate enough to falsify evidence and skew narratives.

      This is a key point in both articles, except this article still considers AI to be helpful in the conflict, whereas the other article merely says it's more subtly malicious. This article feels more black and white in regard to what is good or bad.

    4. Keyboard warriors worked tirelessly to spread misinformation, upload unrelated photos of different warzones, and challenge the authenticity of images.

      "Keyboard warriors" is very dystopian and produces an sort of moral panic intonation on this sentence

    5. Algorithms may analyze and track, but bullets will penetrate and kill.

      The separation of technology from the harm of weapons is interesting, given how intertwined the two tend to be in society, and even in the missiles used in this particular conflict.

    6. The IDF has learned a painful lesson in war, especially how it relates to Hamas: AI is still not on the frontline of the fight.

      Was this ever truly a thought held by anyone but the IDF? Seems dramatic.

    7. Humans are very good at war without the need for AI

      The opposite of the typical personification of violence as belonging to the technological world; war was present long before the technology.

    8. helps spot misinformation and identify missing loved ones has proven more useful

      More of a utopian perspective that goes against typical moral panic episodes

    1. I think this conflict is worse than what we’ve seen in the past,” says Farid. “And I think gen AI is part of that, but it is not exclusively gen AI. That’s too simplistic.”

      I agree with this statement; the conflict has been building since what seems like forever ago. I think gen AI plays a part, but as the statement says, I wouldn't say it is the only aspect of technology interfering with what is being said about this conflict.

    2. ‘Well, wait a minute, if he could have made that one, then that one could be fake, and now these two versions are going around,

      I would argue that this mindset of never being able to tell what is real and what is not is what causes people to be less involved in conflicts such as these, or at least not to be properly informed on them or to want to be. If I can't even decipher what is what, why would I concern myself with it? I think this causes so many people to simply stay in their little bubble and not concern themselves with worldly affairs.

    3. who was behind the missile strike on the Al-Ahli Arab Hospital in Gaza, as well as images of children buried under rubble, some real, some fake.

      Initially it was believed that Hamas merely had a malfunctioning missile, yet after sifting through the oversaturated and fake information in the media it was found that it was not from Hamas.

    4. “For a lot of people, the ability to dismiss inconvenient facts is absolutely playing a role in this conflict,”

      Could you attribute this mindset to the media and the intense misinformation it has spread? This dismissal did not come from nowhere.

    5. Journalists and fact checkers struggle less with deepfakes than they do with out-of-context images or those crudely manipulated into something they’re not, like video game footage presented as a Hamas attack.

      This is incredibly interesting. It's almost as if it is less work to simply find a random video and associate it with the conflict than to produce deepfakes. I think this makes sense and should be highly considered, though, in regard to the video that caused everyone to question whether the bombing at the Palestinian hospital was from Israel or merely a malfunctioning rocket from Palestine.

    6. misinformation only gains power when people see it, and considering the time people have for viral content is finite, the impact is negligible.

      I suppose this is where the effect of AI oversaturating the internet with so much misinformation is considered less "malicious" given its impact, but are we going to talk about how the intent was likely still malicious despite the impact?

    7. “The information space is already being flooded with real and authentic images and footage,” says Mashkoor, “and that in itself is flooding the social media platforms.”

      Oversaturation of these real and fake images and videos makes it hard for the "outside world" to understand what's truly going on. Isn't this entirely against the utopian perspective of how AI is supposed to help us in times of conflict?

    8. AI-generated disinformation is being used by activists to solicit support—or give the impression of wider support—for a particular side.

      I find it interesting that this is considered not to play a "central role" in the spread of disinformation. As someone who has studied in Palestine, focusing on the Israeli-Palestinian conflict, I would say that control of the media and of what is disseminated about the two sides has had a huge impact on the conflict, even prior to the October 7th attacks.

    9. “There are definitely AI images circulating but not to the degree where I think it’s playing a central role in the spread of information,”

      It's not necessarily as blatantly dystopian as others may think

  2. Oct 2023
    1. by wrestling with questions about the character of our shared world and how we relate to one another as co-inhabitants of physical and digital public spaces

      How can we do that when we can't even agree on what history is or what histories do or do not exist?

    2. When we ask about the proper content targeting goals for social media sites, we surface ancient debates in moral philosophy about truth and access to information in the town square.

      We turn blindly to historical references when trying to figure out how best to create this societal structure, but forget, or likely even ignore, the biases, prejudices, and erasure that come with turning back to the past.

    3. If you try to assume a neutral stance, to simply build the most accurate tool, the effect will be to reproduce and entrench the underlying patterns of injustice.

      Unjust foundations lead to the later repetition of injustice in algorithmic patterns.

    4. This requires unpacking how computer scientists and engineers define target variables to predict, construct, and label datasets and develop algorithms and training models

      Tracing the technology back to its creators for biases and problems can ultimately allow us to see such flaws within the technology itself, and thus how it influences society and democracy overall.

    5. This means the policy solutions we develop to regulate the organizations that use data to make decisions, whether simple linear models, machine learning, or even perhaps AI, should be quite different in policing, finance, and social media companies.

      There need to be policies that are adapted to each sector: policing, finance, and social media.

    6. We shouldn’t expect the answers to be the same across different questions.

      A machine does not have the capability of realizing how situations that look the same on paper could truly be entirely different, as that is, at times, only a human capability. So why do we just throw algorithms at technology and force it to make decisions as if all situations are the same? Are we all the same? Of course not, which is why we shouldn't have technology treat us as such.

    1. This includes the effective protection of electoral processes in the digital age, and the compliance of Big Tech corporations and other actors with electoral laws and procedures

      AKA don't mess with people's rights nor interfere with how they carry out their rights in any way.

    2. existing human rights protections must be updated and complemented by new national laws and international agreements

      AKA let's continue to modernize how these algorithms interpret the modernization of the laws and agreements that are put in place.

    3. Such participatory initiatives connect civil society and individual citizens with the digital age, and raise knowledge and awareness of the ethical and political challenges of AI.

      AKA let's educate our citizens consistently so they actually see the impact that AI is having on their lives, both the good and the bad.

    4. How we respond to AI’s challenge to democracy hinges, to some degree, on the way we conceive democracy

      Is this why we are struggling to enact legal action regarding the use of AI? Because we are so politically polarized, especially in how we conceive of democracy?

    5. promoting ethical guidelines with respect to the use of AI.

      Where are these guidelines in the US? Isn't it ironic that the UK, of all places, is enacting more democratic guidelines for the ethical use of AI than the US, which is meant to be laid on a foundation of democracy, given its freedom and independence from Europe?

    6. These collaborations can blur the boundaries between the responsibilities of democratic states and the interests of private corporations,

      How much more blurred can this line become, allowing private corporations to contaminate politics and democratic ideals overall?

    7. FRT are persistently lower when used on darker-skinned and female faces, caused by the gender and racial bias of datasets that are used to train this technology.

      This continues societal stereotypes like "all Black women look the same" and related prejudices, as people see the technology's failures with people of color, and women of color in particular, as proving these points.

    8. there is evidence that it may deter people from attending public events or participating in protests.

      AKA there is evidence that it affects our natural right to protest and, of course, the pursuit of life, liberty, and happiness (assuming that we fully had that prior to AI's creation).

    9. digital welfare dystopia’, where access to welfare provisions are based on machine automation and prediction, with vulnerable communities surveilled and punished by faceless algorithms.

      A movie-like dystopian perspective, even further into the future than typically considered. Here, too, the biases of humans are implanted into technology as it focuses on and targets "vulnerable communities".

    10. Can we measure and quantify rights and justice?

      It is often said that technology will allow true justice without biases, as opposed to humans, but in reality biases are built into technological development by the creators of said technology. So is this truly justice either?

    11. who is accountable when a machine takes a decision?

      Exactly! How long are we going to personify the technology and its biases when the issue is indeed the creators, and thus their biases and problematic use of said technology?

    12. impossible for humans to reconstruct

      AKA out of human control. Control is often equated with free will, and both are seen not only as human necessities but as democratic fundamentals.

    13. Such alteration is deceitful, and it can undermine the credibility of democratically elected politicians or of those preparing to stand for public office.

      FAKE NEWS. Again, one of the biggest fears associated with dystopian perspectives. It can create moral outrage and affect how people involve themselves in politics, ultimately voting based on a literal lie.

    14. On the one hand, we benefit from AI-driven communication platforms that facilitate public debate, connect people, and ease the flow of information – all of these characteristics are core elements of democratic discourse and deliberation

      An address of the opposing argument that again establishes there are positives to AI, but will there be enough weight to these positives given the weight of the negatives?

    15. But the power of AI also poses serious challenges and indeed threats to our fundamental rights, and to the processes, practices, and institutions of democratic societies.

      Key opinion and thesis

    16. AI provides significant benefits, from medical diagnosis and treatment through to traffic control systems and environmental protection.

      Are these positives worth the risk to the democratic ideals that this country holds at its foundation?

    17. AI has the capacity to generate and analyse huge data sets; to guide a diverse group of hardware systems, ranging from mobile phones, robotic vacuum cleaners and intelligent washing machines, to surveillance cameras and autonomous weapons systems

      Yes, AI is so often advertised as mere vacuums, washing machines, etc., but read further into the breadth of AI: it is also present in surveillance and weapons systems. It becomes an entirely different being.

    18. Can democracy survive in the age of algorithms and deepfakes? What, if anything, can be done to protect democracy from the worst excesses of AI?

      Thesis point. Something needs to be put in place to allow democracy to thrive. Should nothing be done administratively, nothing will change and our key pillars of democracy could be negatively affected.

    19. helped to identify swing voters and micro-target them with messages that, according to critics, constituted deliberate misinformation.

      The fact that the identification of swing voters is possible merely from Facebook user information is already outlandish and invasive, much less the fact that messages were actively made to cater to each individual, which is especially malicious. This again caters to the dystopian criticism in which technology is personified as actively spreading misinformation and deceit.

    20. harvesting of data from over 50 million Facebook users, without their consent or knowledge

      Shows the violation of key democratic freedoms, including privacy. Also shows the intentional misuse of personal data, a common motif in polarizing dystopian perspectives on the invasion of people's lives by technology and on its human creators using it to influence societal decisions.

    21. target tens of millions of voters across the US with pro-Trump messages

      Showing the historical misuse of AI immediately within the article already suggests a sense of moral panic; this misuse was also a large source of moral outrage during and after the elections.

    1. You always have this mismatch between the speed with which technology develops and the pace at which legislation is formed.

      This is incredibly interesting to me. In this world, how do we implement laws more quickly on immigration than on technology? That's idiotic to me.

    2. Now, in the absence of law, in the absence of lawmakers working out what they want, the industry is moving forward.

      An interesting way of deflecting. I feel as though lawmakers point at tech industries and tech industries point back at lawmakers in regard to "getting the ball rolling." There's also the worry that creating such laws could go against the free range of both invention and accessibility that is so key to the technological industry.

    3. “we know that A.I. is increasingly being used to manipulate voters with tailored content. Can tech companies keep up with this?”

      Interesting that AI develops this as a top question; it shows its power in its ability to truly read into the concerns many people have regarding tech. However, does this show a skewed perspective or prejudice in what AI chooses to ask, or perhaps a skewed sample of technology users who worry about things like this?

    4. Mark Zuckerberg has said that he is going to release the coding for Meta’s A.I.

      Being willing to release this coding would be a HUGE win for the positive arguments about AI, as it directly counters arguments that it compromises our freedom and is some out-of-touch piece of technology.

    5. we should be agnostic about whether it’s generated by a machine or a human being.

      This seems incredibly problematic. It almost says we shouldn't care about the potential for tech to take over human responsibility and jobs. Also, how do you argue that "they don't have any real meaningful agency or autonomy" and then say that if the job is done right, we shouldn't see a difference between generation by human or machine? Do these points not entirely contradict each other?

    6. Traditional A.I. is generative. It can predict the next word. But it doesn’t know what the inherent meanings of those words are.

      Large argument support here-> breaks down stereotypes hyped up by dystopian arguments and moral panic concepts

    7. You know, this idea of A.I.’s developing a kind of autonomy and an agency of their own, a sort of demonic wish to destroy humanity and turn us all into paper clips and so on

      A dramatization meant to minimize genuine concerns of the dystopian, risk-focused perspective.

    8. And I think, like any major technological innovation, technology can be used for good and for bad purposes, can be used by good and bad people. That’s been the case from the invention of the car to the internet,

      A connection to historical perspectives on new technology, almost to comfort the public: hasn't there always been risk? My problem with this is that this is entirely new . . . so how can you truly compare it to tech of the past when there has been nothing like it before?

    9. But it also comes with risks, including manipulation, disinformation and the existential threat of it being used by bad actors.

      Immediately acknowledging the moral panic that others associate with AI risks, AKA the dystopian perspective of AI's future.

    10. artificial intelligence can be used to combat hate speech

      A key argument that allows an almost utopian perspective of technology and the future of AI -> the utopian perspective is of course biased, given that it is being advanced by the president of global affairs at Meta. Makes me question whether other perspectives will be considered.

  3. Sep 2023
    1. The floodgates for hate and harassment are now even more open, and people who made a home there now have to decide whether they are willing to pay the psychic cost of holding on to what they’ve built.

      Interesting metaphors in the ending of the article, with associations to hate and the concept of paying a cost for what's been built. Does this suggest that something must be torn down in its place?

    2. likely was not designed for you.

      Is there truly any (American-built) digital space that was "designed for you" in regard to people of color? With America not being built to include people of color, will there ever be a societal understanding that there is a necessity for proper integration, and enough overall value for Black people, to even create a digital environment "designed for" them?

    3. #SayHerName, their posts were flagged, taken down, and many were banned. And perhaps now we might add “musked” to the lexicon, since similar trends are seen on Twitter.

      An example of how censorship can be a part of inequality and limits the freedoms of women, people of color, and other groups that are not societally backed.

    4. While some attest to white flight as the reasons behind the demise of MySpace, the same can’t be said of Twitter. The birds flocked here because of Black Excellence.

      A change in cultural dynamics affecting who accesses tech and media, different from the past, as Black culture defines modern culture more than ever before.

    5. Black Twitter, which media studies scholar Meredith Clark defines as “a network of culturally connected communicators using the platform to draw attention to issues of concern to black communities.”

      Is this true power? Control? Freedom? Yes, we've congregated digitally and have a "voice" in a sense, but is it enough?

    6. whether they are attempting to take the pulse of what’s cool or gain insights on social movements.

      A side note referring to the outlandish cultural appropriation from the Black community.

    7. but there is a danger in conflating Twitter the technology and digital space with Twitter the people.

      Remember what Twitter was truly for: people. If there are no people other than those with the same perspectives, it becomes boring, unnecessary, and representative only of whatever perspectives reside there.

    8. There’s a wide discrepancy among Black folks about whether to stay or go, as shown by the fact that we are two Black academics who have made different choices.

      Is this playing into the stereotype that Black people are a monolith? Yes, but in the same sense it describes the fact that, in a society that was built against us, we often turn to one another in culture.

    9. This is how we might frame questions over whether to remain on Twitter or abandon the platform

      We remain on social media platforms assuming there are people we want to be associated with or share something with.

    1. “We should question the system that requires them to record them in the first place.”

      Addressing that "technology" or "social media" is not the issue; systemic racism and biases are to blame, and those biases are personified through what bigots have put in place in our system.

    2. It might be more productive to take actions like pushing for police reform laws or supporting political candidates whose policies you agree with.

      This point is extremely important, both to remind people of the systemic change that can be created and to push back on the constant stereotype that all Black Americans are helpless and under attack at all times, as opposed to the narrative that they have a say in reform, in political candidates, and thus in the policies that are put in place.

    3. how often images of racist violence

      This goes against the democratic belief that the public should have access to anything and everything without censorship (though this has yet to be fully implemented). Especially in times when people like to act as though racism is no longer present, wouldn't control over which instances of racism (if any at all) are shared create issues? How do we decide which instances are shared and which are not?

    4. news organizations should not show videos of people’s death without the permission of the families,

      Yes, but it is not merely news organizations that are sharing such videos. They will always have a way of being posted and shared through numerous media.

    5. We can’t ignore the benefits of technology that let people show their points of view to the world. But we also can’t overlook the unintended consequences when life — particularly our darkest moments — is so public.

      The opposite ends of the spectrum in understanding this "balance" that is wanted

    6. choose for themselves

      This now becomes a question of whether there is free will in regard to who has the choice of sharing these videos of Black beatings, killings, and mistreatment rooted in racism. Who deserves that right? That free will?

    7. She said there is a long track record of Black Americans forcing awareness of racist violence, including Ida B. Wells’s accounts of lynchings, Mamie Till Mobley’s insistence on showing the public her son’s mutilated body and civil rights marchers’ beatings in Selma, Ala., in 1965.

      BIG moments are listed here, but doesn't this prove the point that these videos and photos should be shared? Emmett Till didn't exactly get justice, so does it make a true difference in the present or the past?

    8. Those videos can repeatedly re-expose crime victims, their family members and witnesses to their worst moments. And they can make it seem like Black Americans need to provide proof of racist violence to be believed.

      A negative perspective on how tech and media interact and even create distorted narratives through the circulation of videos of violence against Black Americans -> cons

    9. balance those benefits against the costs

      Is there a balance here? Is the fact that we even have the access and ability to post these videos and demand justice a benefit? Or does this play into other narratives that undermine Black Americans and the violence against them? It doesn't seem right not to post, as the justice system is already askew, so where does that leave this balance?

    10. Phones and social media have also empowered people to tell their own stories and helped bring more attention to the mistreatment of Black Americans.

      A positive perspective on how technology and media have assisted the BLM movement and drawn attention to mistreatment overall -> the narrative of tech's pros in this case

    11. Toll of Bearing

      "Toll" and "Bearing" already are creating a narrative of heaviness associated with the article, even more unwanted weight on people's shoulders.