117 Matching Annotations
  1. Oct 2021
    1. Canada is not an accident or a work in progress or a thought experiment. I mean that Canada is a scam — a pyramid scheme, a ruse, a heist. Canada is a front. And it’s a front for a massive network of resource extraction companies, oil barons, and mining magnates.

      Extraction Empire

      Globally, more than 75% of prospecting and mining companies on the planet are based in Canada. Seemingly impossible to conceive, the scale of these statistics naturally extends the logic of Canada’s historical legacy as state, nation, and now, as global resource empire.

      Canada’s Indian Reserve System served, officially, as a strategy of Indigenous apartheid (preceding South African apartheid) and unofficially, as a policy of Indigenous genocide (preceding the Nazi concentration camps of World War II).

  2. Sep 2021
    1. Kevin Marks talks positively about the way Twitter's retweet functionality bridges new people into one's in-group.

      He doesn't foresee engagement-driven algorithms doing just the opposite: amplifying noise when one's in-group hates on and interacts with "bad" content from the other direction. Some of these effects may also amount to slow brainwashing if not guarded against.

  3. Aug 2021
  4. Jul 2021
  5. Jun 2021
  6. May 2021
    1. Ashish K. Jha, MD, MPH. (2020, October 27). President keeps saying we have more cases because we are testing more This is not true But wait, how do we know? Doesn’t more testing lead to identifying more cases? Actually, it does So we look at other data to know if its just about testing or underlying infections Thread [Tweet]. @ashishkjha. https://twitter.com/ashishkjha/status/1321118890513080322

  7. Apr 2021
  8. Mar 2021
  9. Feb 2021
    1. However, banning him opens a very dangerous precedent, making the US more like a dictatorship... more like China. Also it's not effective. Those who were silenced will only have more motivation, and the risk of terrorism is greatly increased. The people must decide what is true. Not big companies. Individuals must be able to express their beliefs. Bot accounts must be banned, but real individuals must not. If you think a group of people is a bunch of idiots who believe fake news, then, tough, that's democracy for you. Maybe it means that your government is not investing enough in education and welfare to properly educate and give hope to those people.
    1. A 2018 MIT study co-authored by the company’s own former chief media scientist found that false news stories on Twitter are 70 percent more likely to be retweeted than true stories and spread across the network at least six times faster.
  10. Oct 2020
    1. People who study online disinformation generally look at three criteria to assess whether a given page, account cluster, or channel is manipulative. First is account authenticity: Do the accounts accurately reflect a human identity or collection of behaviors that indicates they are authentic, even if anonymous? Second is the narrative distribution pattern: Does the message distribution appear organic and behave in the way humans interact and spread ideas? Or does the scale, timing, or volume appear coordinated and manufactured? Third, source integrity: Do the sites and domains in question have a reputation for integrity, or are they of dubious quality? This last criteria is the most prone to controversy, and the most difficult to get right.
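
      A minimal sketch, mine rather than the article's, of how these three criteria might be encoded when triaging a page, account cluster, or channel. The field names and flag labels are hypothetical illustrations, not a published rubric.

        # Hypothetical triage record for the three manipulation criteria above.
        from dataclasses import dataclass

        @dataclass
        class ChannelAssessment:
            authentic_identity: bool    # 1. accounts reflect a real human identity/behavior
            organic_distribution: bool  # 2. spread looks human, not coordinated or manufactured
            reputable_sources: bool     # 3. cited sites/domains have a reputation for integrity

            def manipulation_flags(self) -> list[str]:
                """List which of the three criteria the channel fails."""
                flags = []
                if not self.authentic_identity:
                    flags.append("inauthentic accounts")
                if not self.organic_distribution:
                    flags.append("coordinated distribution")
                if not self.reputable_sources:
                    flags.append("dubious sources")
                return flags

        # Example: real users, but manufactured amplification.
        cluster = ChannelAssessment(True, False, True)
        print(cluster.manipulation_flags())  # ['coordinated distribution']

      As the quote notes, the third field is the hardest to fill in honestly; no boolean does justice to a contested judgment about source integrity.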
    1. If everyone would subscribe to such a system and create good metadata for the purposes of describing their goods, services and information, it would be a trivial matter to search the Internet for highly qualified, context-sensitive results: a fan could find all the downloadable music in a given genre, a manufacturer could efficiently discover suppliers, travelers could easily choose a hotel room for an upcoming trip. A world of exhaustive, reliable metadata would be a utopia. It's also a pipe-dream, founded on self-delusion, nerd hubris and hysterically inflated market opportunities.

      Apparently this now applies to politics and democracy too.
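
      A toy illustration of the utopia this quote describes: if every record carried honest, consistent metadata, context-sensitive search would reduce to a trivial filter. The records and fields below are invented for the example.

        # Invented metadata records; the point is the triviality of the query.
        records = [
            {"type": "music", "genre": "jazz", "downloadable": True, "title": "Take Five"},
            {"type": "hotel", "city": "Lisbon", "price_per_night": 90},
            {"type": "music", "genre": "jazz", "downloadable": False, "title": "So What"},
        ]

        def find(items, **criteria):
            """Return items whose metadata matches every given field exactly."""
            return [r for r in items
                    if all(r.get(k) == v for k, v in criteria.items())]

        print(find(records, type="music", genre="jazz", downloadable=True))
        # [{'type': 'music', 'genre': 'jazz', 'downloadable': True, 'title': 'Take Five'}]

      The filter really is trivial; the delusion, as the quote argues, lies in expecting the metadata to be exhaustive and honest in the first place, in commerce or in politics.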

    1. Christian Nestell Bovee often receives credit for the quote. “Kindness: a language which the dumb can speak and the deaf can understand,” he wrote in his 1857 book “Thoughts, Feelings, and Fancies.”
    1. “every courageous and incisive measure to solve internal problems of our own society, to improve self-confidence, discipline, morale and community spirit of our own people, is a diplomatic victory over Moscow worth a thousand diplomatic notes and joint communiqués. If we cannot abandon fatalism and indifference in the face of deficiencies of our own society, Moscow will profit.”

      Perhaps the best defense against active measures is a little bit of activism of our own.

    1. Last year, a paper published in Science found that people over the age of 65 were seven times as likely as those ages 30 to 44, the youngest group included in that survey, to share articles from websites that spread false information during the 2016 presidential campaign.

      Why are the results different now?

  11. Sep 2020
    1. The reason I’m optimistic is not that I think QAnon will disappear in a year but that something like QAnon is proof that people care and people like being involved in pursuit of truth. In QAnon that care and pursuit are dangerously twisted. But it gives people who feel unwelcome in lots of places a sense of purpose. You can make projects and build community that harnesses that positively. The same way bad actors can look at QAnon and find a playbook, so can good actors. We can find similar ways to motivate alienated people in a more constructive way. At least I hope so.
  12. Aug 2020
  13. Jul 2020
  14. Jun 2020
    1. If you eventually do manage to find the information you need, kudos. You’re obviously very committed to learn more. But wasn’t the whole “we need context” meme prompted by the acknowledgement that most readers get confused and quit way before that stage?
  15. May 2020
  16. Apr 2020
  17. Mar 2020
    1. Microtargeted ads are also, as we now know all too well, a pre-greased electronic conduit for attacks on democracy and society — enabling the spread of malicious disinformation.
  18. Nov 2019
    1. Disinformation in Contemporary U.S. Foreign Policy: Impacts and Ethics in an Era of Fake News, Social Media, and Artificial Intelligence

      The authors examine the implications of fake news (aka disinformation campaigns). Before we start reading the article, I would like you to go out onto the internet (preferably to reliable and credible sources) and find out more about American disinformation campaigns abroad. Please share the cases you found here.

  19. Oct 2019
  20. Jun 2019
    1. In Trump’s first TV ad of the presidential primary in 2015, he used an image of a mass of immigrants; fact-checkers revealed the picture was in fact taken in Morocco.

      Yet another example of Trump anchoring himself in lies and disinformation.

  21. Feb 2019
    1. global communities

      This ties in to the "ethical responsibilities" bullet below, but I think we've largely failed in this regard. Perhaps it was less a failure than naivety about the purpose and promise of tech use. Online social spaces have become a war zone, co-opted by various groups. We need to do a better job of educating, advocating, and empowering individuals to survive in these spaces.

  22. Oct 2018
  23. Sep 2018
    1. How can we get back to that common ground? We need new mechanisms—suited to the digital age—that allow for a shared understanding of facts and that focus our collective attention on the most important problems.
    2. Deluged by apparent facts, arguments and counterarguments, our brains resort to the most obvious filter, the easiest cognitive shortcut for a social animal: We look to our peers, see what they believe and cheer along. As a result, open and participatory speech has turned into its opposite. Important voices are silenced by mobs of trolls using open platforms to hurl abuse and threats. Bogus news shared from one friend or follower to the next becomes received wisdom. Crucial pieces of information drown in so much irrelevance that they are lost. If books were burned in the street, we would be alarmed. Now, we are simply exhausted.
    3. For the longest time, we thought that as speech became more democratized, democracy itself would flourish. As more and more people could broadcast their words and opinions, there would be an ever-fiercer battle of ideas—with truth emerging as the winner, stronger from the fight. But in 2018, it is increasingly clear that more speech can in fact threaten democracy. The glut of information we now face, made possible by digital tools and social media platforms, can bury what is true, greatly elevate and amplify misinformation and distract from what is important.
    4. But in the digital age, when speech can exist mostly unfettered, the big threat to truth looks very different. It’s not just censorship, but an avalanche of undistinguished speech—some true, some false, some fake, some important, some trivial, much of it out-of-context, all burying us.
  24. Aug 2018
    1. The first of the two maps in the GIF image below shows the US political spectrum on the eve of the 2016 election. The second map highlights the followers of a 30-something American woman called Jenna Abrams, a following gained with her viral tweets about slavery, segregation, Donald Trump, and Kim Kardashian. Her far-right views endeared her to conservatives, and her entertaining shock tactics won her attention from several mainstream media outlets and got her into public spats with prominent people on Twitter, including a former US ambassador to Russia. Her following in the right-wing Twittersphere enabled her to influence the broader political conversation. In reality, she was one of many fake personas created by the infamous St. Petersburg troll farm known as the Internet Research Agency.
    2. Instead of trying to force their messages into the mainstream, these adversaries target polarized communities and “embed” fake accounts within them. The false personas engage with real people in those communities to build credibility. Once their influence has been established, they can introduce new viewpoints and amplify divisive and inflammatory narratives that are already circulating. It’s the digital equivalent of moving to an isolated and tight-knit community, using its own language quirks and catering to its obsessions, running for mayor, and then using that position to influence national politics.
    3. However, as the following diagrams will show, the middle is a lot weaker than it looks, and this makes public discourse vulnerable both to extremists at home and to manipulation by outside actors such as Russia.
    1. Most Americans pay at least a little attention to current events, but they differ enormously in where they turn to get their news and which stories they pay attention to. To get a better sense of how a busy news cycle played out in homes across the country, we repeated an experiment, teaming up with YouGov to ask 1,000 people nationwide to describe their news consumption and respond to a simple prompt: “In your own words, please describe what you would say happened in the news on Tuesday.”
  25. Jul 2018
    1. "The internet has become the main threat — a sphere that isn't controlled by the Kremlin," said Pavel Chikov, a member of Russia's presidential human rights council. "That's why they're going after it. Its very existence as we know it is being undermined by these measures."
    2. Gatov, who is the former head of Russia's state newswire's media analytics laboratory, told BuzzFeed the documents were part of long-term Kremlin plans to swamp the internet with comments. "Armies of bots were ready to participate in media wars, and the question was only how to think their work through," he said. "Someone sold the thought that Western media, which specifically have to align their interests with their audience, won't be able to ignore saturated pro-Russian campaigns and will have to change the tone of their Russia coverage to placate their angry readers."
    3. "There's no paradox here. It's two sides of the same coin," Igor Ashmanov, a Russian internet entrepreneur known for his pro-government views, told BuzzFeed. "The Kremlin is weeding out the informational field and sowing it with cultured plants. You can see what will happen if they don't clear it out from the gruesome example of Ukraine."
    4. The trolls appear to have taken pains to learn the sites' different commenting systems. A report on initial efforts to post comments discusses the types of profanity and abuse that are allowed on some sites, but not others. "Direct offense of Americans as a race are not published ('Your nation is a nation of complete idiots')," the author wrote of fringe conspiracy site WorldNetDaily, "nor are vulgar reactions to the political work of Barack Obama ('Obama did shit his pants while talking about foreign affairs, how you can feel yourself psychologically comfortable with pants full of shit?')." Another suggested creating "up to 100" fake accounts on the Huffington Post to master the site's complicated commenting system.
    5. According to the documents, which are attached to several hundred emails sent to the project's leader, Igor Osadchy, the effort was launched in April and is led by a firm called the Internet Research Agency. It's based in a Saint Petersburg suburb, and the documents say it employs hundreds of people across Russia who promote Putin in comments on Russian blogs.
    6. The documents show instructions provided to the commenters that detail the workload expected of them. On an average working day, the Russians are to post on news articles 50 times. Each blogger is to maintain six Facebook accounts publishing at least three posts a day and discussing the news in groups at least twice a day. By the end of the first month, they are expected to have won 500 subscribers and get at least five posts on each item a day. On Twitter, the bloggers are expected to manage 10 accounts with up to 2,000 followers and tweet 50 times a day.
    7. Russia's campaign to shape international opinion around its invasion of Ukraine has extended to recruiting and training a new cadre of online trolls that have been deployed to spread the Kremlin's message on the comments section of top American websites.Plans attached to emails leaked by a mysterious Russian hacker collective show IT managers reporting on a new ideological front against the West in the comments sections of Fox News, Huffington Post, The Blaze, Politico, and WorldNetDaily.The bizarre hive of social media activity appears to be part of a two-pronged Kremlin campaign to claim control over the internet, launching a million-dollar army of trolls to mold American public opinion as it cracks down on internet freedom at home.
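
      Back-of-the-envelope arithmetic on the daily quotas quoted above (50 news-site comments, six Facebook accounts posting at least three times each, 50 tweets). Where the source is ambiguous about per-account versus per-person figures, the assumption is flagged in a comment; the headcount is hypothetical.

        # Arithmetic sketch of one paid commenter's daily quota, per the leaked plans.
        news_comments  = 50      # posts on news articles per day
        facebook_posts = 6 * 3   # 6 Facebook accounts x at least 3 posts each
        tweets         = 50      # "tweet 50 times a day" (assumed per person, not per account)

        daily_items = news_comments + facebook_posts + tweets
        print(daily_items)        # 118 items per commenter per day
        print(daily_items * 200)  # 23,600/day for a hypothetical 200-person staff

      Even on conservative readings, the "hundreds of people" the documents mention would produce tens of thousands of comments, posts, and tweets a day, which is what makes a saturated campaign feasible.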
    1. creating a new international news operation called Sputnik to “provide an alternative viewpoint on world events.” More and more, though, the Kremlin is manipulating the information sphere in more insidious ways.
    1. The New Yorker’s Sasha Frere-Jones called Twitter a “self-cleaning oven,” suggesting that false information could be flagged and self-corrected almost immediately. We no longer had to wait 24 hours for a newspaper to issue a correction.
    1. We’ve built an information ecosystem where information can fly through social networks (both technical and personal). Folks keep looking to the architects of technical networks to solve the problem. I’m confident that these companies can do a lot to curb some of the groups who have capitalized on what’s happening to seek financial gain. But the battles over ideology and attention are going to be far trickier. What’s at stake isn’t “fake news.” What’s at stake is the increasing capacity of those committed to a form of isolationist and hate-driven tribalism that has been around for a very long time. They have evolved with the information landscape, becoming sophisticated in leveraging whatever tools are available to achieve power, status, and attention. And those seeking a progressive and inclusive agenda, those seeking to combat tribalism to form a more perfect union —  they haven’t kept up.
    1. Dissemination Mechanisms

      Finally, we need to think about how this content is being disseminated. Some of it is being shared unwittingly by people on social media, clicking retweet without checking. Some of it is being amplified by journalists who are now under more pressure than ever to try and make sense and accurately report information emerging on the social web in real time. Some of it is being pushed out by loosely connected groups who are deliberately attempting to influence public opinion, and some of it is being disseminated as part of sophisticated disinformation campaigns, through bot networks and troll factories.

    2. When messaging is coordinated and consistent, it easily fools our brains, already exhausted and increasingly reliant on heuristics (simple psychological shortcuts) due to the overwhelming amount of information flashing before our eyes every day. When we see multiple messages about the same topic, our brains use that as a short-cut to credibility. It must be true we say — I’ve seen that same claim several times today.
    3. I saw Eliot Higgins present in Paris in early January, and he listed four ‘Ps’ which helped explain the different motivations. I’ve been thinking about these a great deal and using Eliot’s original list have identified four additional motivations for the creation of this type of content: Poor Journalism, Parody, to Provoke or ‘Punk’, Passion, Partisanship, Profit, Political Influence or Power, and Propaganda.This is a work in progress but once you start breaking these categories down and mapping them against one another you begin to see distinct patterns in terms of the types of content created for specific purposes.
    4. Back in November, I wrote about the different types of problematic information I saw circulate during the US election. Since then, I’ve been trying to refine a typology (and thank you to Global Voices for helping me to develop my definitions even further). I would argue there are seven distinct types of problematic content that sit within our information ecosystem. They sit on a scale, one that loosely measures the intent to deceive.
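
      The motivations listed two items above reduce naturally to a small tag set. A minimal sketch, mine rather than the author's, of how the two axes (motivation and content type) could be tagged and cross-tabulated; the enum values simply transcribe the list from the text.

        # Eight motivations for creating problematic content, per the list above.
        from enum import Enum

        class Motivation(Enum):
            POOR_JOURNALISM     = "poor journalism"
            PARODY              = "parody"
            PROVOCATION         = "to provoke or 'punk'"
            PASSION             = "passion"
            PARTISANSHIP        = "partisanship"
            PROFIT              = "profit"
            POLITICAL_INFLUENCE = "political influence or power"
            PROPAGANDA          = "propaganda"

        # Tagging items on both axes is what lets the patterns the author
        # describes emerge; "content_type" is left open here because the
        # seven types themselves are not enumerated in this excerpt.
        example = {"content_type": None, "motivation": Motivation.PARTISANSHIP}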