174 Matching Annotations
  1. Nov 2022
    1. Under President Joe Biden, the shifting focus on disinformation has continued. In January 2021, CISA replaced the Countering Foreign Influence Task Force with the “Misinformation, Disinformation and Malinformation” team, which was created “to promote more flexibility to focus on general MDM.” By now, the scope of the effort had expanded beyond disinformation produced by foreign governments to include domestic versions. The MDM team, according to one CISA official quoted in the IG report, “counters all types of disinformation, to be responsive to current events.” Jen Easterly, Biden’s appointed director of CISA, swiftly made it clear that she would continue to shift resources in the agency to combat the spread of dangerous forms of information on social media.

      MDM == Misinformation, Disinformation, and Malinformation.

      These definitions from earlier in the article:

      * misinformation (false information spread unintentionally)
      * disinformation (false information spread intentionally)
      * malinformation (factual information shared, typically out of context, with harmful intent)

  2. Oct 2022
    1. “I think we were so happy to develop all this critique because we were so sure of the authority of science,” Latour reflected this spring. “And that the authority of science would be shared because there was a common world.”

      This is crucial. Latour was constructing science based on a belief in its authority - not deconstructing science. And the point about the common world, as inherently connected to the authority of science, is great.

    1. What Labiste described as a “well-oiled operation” has been years in the making. The Marcos Jr campaign has utilised Facebook pages and groups, YouTube channels and TikTok videos to reach out to Filipino voters, most of whom use the internet to get their political news. A whistleblower at the British data analytics firm, Cambridge Analytica, which assisted with the presidential campaign of former US President Donald Trump, also said Marcos Jr sought help to rebrand the family’s image in 2016, a claim he denied.

      Perhaps among the mistakes made by the election campaigns opposing the Marcos presidential candidacy was a failure to comprehend the impact of disinformation material generated over social media. There was an awareness that it had to be constantly called out and corrected. However, the efforts to do so failed to understand the population the disinformation campaign was targeting: corrections of wrong facts and presentations of the truth were understood only by those who made them. Moreover, by the time the campaign against false information began in earnest, the channels for disinformation had already been well established and had taken root among those inclined to accept it as truth. This explains why so many fact-checking efforts have fallen on deaf ears, and in some cases have even pushed those who believe disinformation to hold onto it even more.

    1. Edgerly noted that disinformation spreads through two ways: The use of technology and human nature. Click-based advertising, news aggregation, the process of viral spreading and the ease of creating and altering websites are factors considered under technology. “Facebook and Google prioritize giving people what they ‘want’ to see; advertising revenue (are) based on clicks, not quality,” Edgerly said. She noted that people have the tendency to share news and website links without even reading its content, only its headline. According to her, this perpetuates a phenomenon of viral spreading or easy sharing. There is also the case of human nature involved, where people are “most likely to believe” information that supports their identities and viewpoints, Edgerly cited. “Vivid, emotional information grabs attention (and) leads to more responses (such as) likes, comments, shares. Negative information grabs more attention than (the) positive and is better remembered,” she said. Edgerly added that people tend to believe in information that they see on a regular basis and those shared by their immediate families and friends.

      Spreading misinformation and disinformation is really easy in this day and age because of how accessible information is and how much of it there is on the web. This is explained precisely by Edgerly. Noted in this part of the article, there is a business in the spread of disinformation, particularly in our country. There are people who pay what we call online trolls to spread disinformation and capitalize on how “chronically online” Filipinos are, among many other factors (i.e., most Filipinos’ information illiteracy due to poverty and lack of educational attainment, how easy it is to interact with content we see online regardless of its authenticity, etc.). Disinformation also leads to misinformation through word of mouth. As stated by Edgerly in this article, “people tend to believe in information… shared by their immediate families and friends”; because of people’s natural tendency to trust information shared by their loved ones, those who are not information literate will not question newly received information. Lastly, it most certainly does not help that social media algorithms nowadays rely on what users interact with; the more a user interacts with certain information, the more the platform will feed them that information. Not all social media websites have fact-checkers, and users can freely spread disinformation if they choose to.
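      As a toy illustration of the engagement-driven ranking Edgerly describes: the sketch below (all names and weights are invented, not any platform's actual algorithm) orders a feed purely by interaction counts, so nothing in the scoring ever checks whether a post is true.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Post:
          text: str
          likes: int
          comments: int
          shares: int

      def engagement_score(post: Post) -> float:
          """Toy score: shares and comments weigh more than likes,
          mirroring how 'vivid, emotional' content rises in a feed."""
          return post.likes + 2 * post.comments + 3 * post.shares

      def rank_feed(posts: list[Post]) -> list[Post]:
          # Most-engaged first; accuracy plays no role in the ordering.
          return sorted(posts, key=engagement_score, reverse=True)

      feed = rank_feed([
          Post("Careful fact-check", likes=10, comments=1, shares=1),
          Post("Outrage-bait rumor", likes=8, comments=9, shares=12),
      ])
      print([p.text for p in feed])
      ```

      Under these assumed weights the rumor outranks the fact-check despite fewer likes, which is the dynamic the annotation points at: interaction, not quality, decides visibility.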

    1. "In 2013, we spread fake news in one of the provinces I was handling," he says, describing how he set up his client's opponent. "We got the top politician's cell phone number and photo-shopped it, then sent out a text message pretending to be him, saying he was looking for a mistress. Eventually, my client won."

      This statement from a man who claims to work for politicians as an internet troll and propagator of fake news was really striking, because it shows how fabricating something out of the blue can have a profound impact on elections--something that is supposed to be a democratic process. Now more than ever, mudslinging in popular information spaces like social media can easily sway public opinion (or confirm it). We saw this during the election season, when Leni Robredo bore the brunt of outrageous rumors; one rumor I remember well was that Leni had supposedly married an NPA member and had a child with him. It is tragic that misinformation and disinformation are no longer a mere phenomenon but a fully blown industry. It has a tight clutch on the decisions people make for the country, while also deeply affecting their values and beliefs.

    1. Trolls, in this context, are humans who hold accounts on social media platforms, more or less for one purpose: To generate comments that argue with people, insult and name-call other users and public figures, try to undermine the credibility of ideas they don’t like, and to intimidate individuals who post those ideas. And they support and advocate for fake news stories that they’re ideologically aligned with. They’re often pretty nasty in their comments. And that gets other, normal users, to be nasty, too.

      Not only are programmed accounts created, but also troll accounts that propagate disinformation and spread fake news with the intent to wreak havoc among people. In short, once they start with a malicious comment, some people will engage with it, which leads to more rage comments and disagreements. That is what they do: they provoke people into engaging with their comments so the content spreads further and produces more fake news. These troll accounts are usually prominent during elections; in the Philippines, some speculate that certain candidates have run troll farms just to spread fake news all over social media, which some people then engage with.

    2. So, bots are computer algorithms (set of logic steps to complete a specific task) that work in online social network sites to execute tasks autonomously and repetitively. They simulate the behavior of human beings in a social network, interacting with other users, and sharing information and messages [1]–[3]. Because of the algorithms behind bots’ logic, bots can learn from reaction patterns how to respond to certain situations. That is, they possess artificial intelligence (AI). 

      In all honesty, since I don't usually dwell on technology and coding, I thought that when you say "bot" it is controlled by another user, like a real person. I never knew that it was programmed and created to learn the usual posting patterns of people, whether on Twitter, Facebook, or other social media platforms. I think it is important to properly understand how bots work to avoid misinformation and disinformation, most importantly during this time of prominent social media use.
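      The "autonomous and repetitive" behavior the quoted passage attributes to bots can be sketched in a few lines. Everything below is a hypothetical stand-in (`fetch_trending`, `compose_reply`, and the canned templates are invented for illustration), not a real social-network client or any actual bot's code.

      ```python
      import random
      import time

      # Stub "platform data" -- hypothetical, not a real API.
      TRENDING = ["#election", "#vaccines", "#economy"]
      TEMPLATES = ["So true about {tag}!",
                   "Everyone is ignoring {tag}.",
                   "{tag} says it all."]

      def fetch_trending() -> list[str]:
          # A real bot would query the platform; we return fixed topics.
          return TRENDING

      def compose_reply(tag: str) -> str:
          # Canned templates: this repetitive, pattern-based generation
          # is both what makes simple bots scalable and what makes them
          # detectable.
          return random.choice(TEMPLATES).format(tag=tag)

      def run_bot(cycles: int, delay: float = 0.0) -> list[str]:
          """Autonomously repeat the same task: find topics, post replies."""
          posted = []
          for _ in range(cycles):
              for tag in fetch_trending():
                  posted.append(compose_reply(tag))
              time.sleep(delay)
          return posted

      messages = run_bot(cycles=2)  # 2 cycles x 3 trending tags = 6 posts
      ```

      Even this trivial loop shows the asymmetry the annotation worries about: one script can produce messages at a pace no human moderator can match.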

  3. Sep 2022
    1. the court upheld a preposterous Texas law stating that online platforms with more than 50 million monthly active users in the United States no longer have First Amendment rights regarding their editorial decisions. Put another way, the law tells big social-media companies that they can’t moderate the content on their platforms.
    1. Many of you are already aware of recent changes that the Foundation has made to its NDA policy. These changes have been discussed on Meta, and I won’t reiterate all of our disclosures there,[2] but I will briefly summarize that due to credible information of threat, the Foundation has modified its approach to accepting “non-disclosure agreements” from individuals. The security risk relates to information about infiltration of Wikimedia systems, including positions with access to personally identifiable information and elected bodies of influence. We could not pre-announce this action, even to our most trusted community partner groups (like the stewards), without fear of triggering the risk to which we’d been alerted. We restricted access to these tools immediately in the jurisdictions of concern, while working with impacted users to determine if the risk applied to them.
  4. Aug 2022
    1. Kahne and Bowyer (2017) exposed thousands of young people in California to some true messages and some false ones, similar to the memes they may see on social media
    2. Many U.S. educators believe that increasing political polarization combines with the hazards of misinformation and disinformation in ways that underscore the need for learners to acquire the knowledge and skills required to navigate a changing media landscape (Hamilton et al. 2020a)
  5. Jul 2022
    1. An

      Find common ground. Clear away the kindling. Provide context...don't de-platform.

    2. You have three options:

       1. Continue fighting fires with hordes of firefighters (in this analogy, fact-checkers).
       2. Focus on the arsonists (the people spreading the misinformation) by alerting the town they're the ones starting the fire (banning or labeling them).
       3. Clear the kindling and dry brush (teach people to spot lies, think critically, and ask questions).

       Right now, we do a lot of #1. We do a little bit of #2. We do almost none of #3, which is probably the most important and the most difficult. I’d propose three strategies for addressing misinformation by teaching people to ask questions and spot lies.
    3. Simply put, the threat of "misinformation" being spread at scale is not novel or unique to our generation—and trying to slow the advances of information sharing is futile and counter-productive.
    4. It’s worth reiterating precisely why: The very premise of science is to create a hypothesis, put that hypothesis up to scrutiny, pit good ideas against bad ones, and continue to test what you think you know over and over and over. That’s how we discovered tectonic plates and germs and key features of the universe. And oftentimes, it’s how we learn from great public experiments, like realizing that maybe paper or reusable bags are preferable to plastic.

      develop a hypothesis, and pit different ideas against one another

    5. All of these approaches tend to be built on an assumption that misinformation is something that can and should be censored. On the contrary, misinformation is a troubling but necessary part of our political discourse. Attempts to eliminate it carry far greater risks than attempts to navigate it, and trying to weed out what some committee or authority considers "misinformation" would almost certainly restrict our continued understanding of the world around us.
    6. To start, it is worth defining “misinformation”: Simply put, misinformation is “incorrect or misleading information.” This is slightly different from “disinformation,” which is “false information deliberately and often covertly spread (by the planting of rumors) in order to influence public opinion or obscure the truth.” The notable difference is that disinformation is always deliberate.
  6. Apr 2022
    1. Given the difficulty of regulating every online post, especially in a country that protects most forms of speech, it seems far more prudent to focus most of our efforts on building an educated and resilient public that can spot and then ignore disinformation campaigns

      On the need for disinformation education

      ...but what is the difference "between what’s a purposeful attempt to mislead the public and what’s being called disinformation because of a genuine difference of opinion"

    1. Nick Sawyer, MD, MBA, FACEP [@NickSawyerMD]. (2022, January 3). The anti-vaccine community created a manipulated version of VARES that misrepresents the VAERS data. #disinformationdoctors use this data to falsely claim that vaccines CAUSE bad outcomes, when the relationship is only CORRELATED. Watch this explainer: Https://youtu.be/VMUQSMFGBDo https://t.co/ruRY6E6blB [Tweet]. Twitter. https://twitter.com/NickSawyerMD/status/1477806470192197633

    1. Ashish K. Jha, MD, MPH. (2020, October 27). President keeps saying we have more cases because we are testing more This is not true But wait, how do we know? Doesn’t more testing lead to identifying more cases? Actually, it does So we look at other data to know if its just about testing or underlying infections Thread [Tweet]. @ashishkjha. https://twitter.com/ashishkjha/status/1321118890513080322

    1. Kit Yates. (2021, September 27). This is absolutely despicable. This bogus “consent form” is being sent to schools and some are unquestioningly sending it out with the real consent form when arranging for vaccination their pupils. Please spread the message and warn other parents to ignore this disinformation. Https://t.co/lHUvraA6Ez [Tweet]. @Kit_Yates_Maths. https://twitter.com/Kit_Yates_Maths/status/1442571448112013319

    1. James 💙 Neill—😷 🇪🇺🇮🇪🇬🇧🔶 on Twitter: “The domain sending that fake NHS vaccine consent hoax form to schools has been suspended. Excellent work by @martincampbell2 and fast co-operation by @kualo 👍 FYI @fascinatorfun @Kit_Yates_Maths @dgurdasani1 @AThankless https://t.co/pbAgNfkbEs” / Twitter. (n.d.). Retrieved November 22, 2021, from https://twitter.com/jneill/status/1442784873014566913

    1. Weinberg’s tweet announcing the change generated thousands of comments, many of them from conservative-leaning users who were furious that the company they turned to in order to get away from perceived Big Tech censorship was now the one doing the censoring. It didn’t help that the content DuckDuckGo was demoting and calling disinformation was Russian state media, whose side some in the right-wing contingent of DuckDuckGo’s users were firmly on.

      There is an odd sort of self-selected information bubble here. DuckDuckGo promoted itself as privacy-aware, not unfiltered. On their Sources page, they talk about where they get content and how they don't sacrifice privacy to gather search results. Demoting disinformation sources in their algorithms would seem to be a good thing. Except if what you expect to see is disinformation, and then suddenly the search results don't match your expectations.

  7. Mar 2022
    1. ReconfigBehSci. (2022, February 17). @thackerpd @STWorg “carping about anti-vaxxers”? You mean constant attempts to try and save lives and end pandemic by generating, curating and promoting research data on the benefits of vaccination and/or generating, curating and promoting data that undercuts the wilful disinformation on vaxx? [Tweet]. @SciBeh. https://twitter.com/SciBeh/status/1494201269724012546

  8. Feb 2022
    1. Prof. Gavin Yamey MD MPH. (2022, January 21). His vax disinformation is at 34.55 in the video His full comments: “But people do not trust this vaccine. It’s not been through the normal trials, it’s a technology that’s not been proven as safe & effective, & now the data is coming in on the vaccine that’s showing that..” 1/3 [Tweet]. @GYamey. https://twitter.com/GYamey/status/1484668215825469445

    1. The paper did not seem to have consent from participants for: (a) agreeing to participate in the different types of interventions (which have the potential of hurting the health of citizens and could even lead to their death); (b) using their personal health data to publish a research paper.

      Given that the authors are government actors who can easily access millions of citizens and put them in the study groups they desire, without even telling citizens that they are in a study, I worry that this practice will be popularized by governments in Latin America and put citizens in danger.

      I also want to note that this is a new type of disinformation, where government actors can post content on these repositories; given that most citizens do NOT know it is NOT peer reviewed, it can help the government actors validate their public policy. The research paper becomes political propaganda, and the international repository helps to promote the propaganda.

  9. Jan 2022
  10. Dec 2021
    1. Prof. Shane Crotty. (2021, November 2). Wow. COVID vaccine misinformation continues to be soooo horrible. This is incredible widespread and ABSOLUTELY made up. (Just like the insanity of implantable chips they continue to claim over and over) These fabrications are so damaging to the health of Americans. [Tweet]. @profshanecrotty. https://twitter.com/profshanecrotty/status/1455540502955241489

  11. Nov 2021
  12. Oct 2021
    1. Canada is not an accident or a work in progress or a thought experiment. I mean that Canada is a scam — a pyramid scheme, a ruse, a heist. Canada is a front. And it’s a front for a massive network of resource extraction companies, oil barons, and mining magnates.

      Extraction Empire

      Globally, more than 75% of prospecting and mining companies on the planet are based in Canada. Seemingly impossible to conceive, the scale of these statistics naturally extends the logic of Canada’s historical legacy as state, nation, and now, as global resource empire.

      Canada’s Indian Reserve System served, officially, as a strategy of Indigenous apartheid (preceding South African apartheid) and unofficially, as a policy of Indigenous genocide (preceding the Nazi concentration camps of World War II).

  13. Sep 2021
    1. Kevin Marks talks about the bridging of new people into one's in-group by Twitter's retweet functionality from a positive perspective.

      He doesn't foresee the deleterious effects of engagement algorithms doing just the opposite: increasing the volume of noise as one's in-group hates on and interacts with "bad" content from the other direction. Some of these effects may also be harmful from a slow-brainwashing perspective if not protected against.

  14. Aug 2021
  15. Jul 2021
  16. Jun 2021
  17. May 2021
  18. Apr 2021
  19. Mar 2021