42 Matching Annotations
  1. Nov 2019
    1. Disinformation in Contemporary U.S. Foreign Policy: Impacts and Ethics in an Era of Fake News, Social Media, and Artificial Intelligence

      The authors examine the implications of fake news (aka disinformation campaigns). Before we start reading the article, I would like you to go out into the internet (preferably the reliable and credible sources on the net) and find more about American disinformation campaigns abroad. Please share the cases you found here.

  2. Oct 2019
  3. Jun 2019
    1. In Trump’s first TV ad of the presidential primary in 2015, he used an image of a mass of immigrants; fact-checkers revealed the picture was in fact taken in Morocco.

      Yet another example of Trump anchoring himself in lies and disinformation.

  4. Feb 2019
    1. global communities

      This ties into the "ethical responsibilities" bullet below, but I think we've largely failed in this regard. Perhaps it wasn't so much a failure as naivete about the purpose and promise of tech use. Online social spaces have become a war zone, co-opted by various groups. We need to do a better job educating, advocating, and empowering individuals to survive in these spaces.

  5. Oct 2018
    1. People who study online disinformation generally look at three criteria to assess whether a given page, account cluster, or channel is manipulative. First is account authenticity: Do the accounts accurately reflect a human identity or collection of behaviors that indicates they are authentic, even if anonymous? Second is the narrative distribution pattern: Does the message distribution appear organic and behave in the way humans interact and spread ideas? Or does the scale, timing, or volume appear coordinated and manufactured? Third, source integrity: Do the sites and domains in question have a reputation for integrity, or are they of dubious quality? This last criterion is the most prone to controversy, and the most difficult to get right.
  6. Sep 2018
    1. How can we get back to that common ground? We need new mechanisms—suited to the digital age—that allow for a shared understanding of facts and that focus our collective attention on the most important problems.
    2. Deluged by apparent facts, arguments and counterarguments, our brains resort to the most obvious filter, the easiest cognitive shortcut for a social animal: We look to our peers, see what they believe and cheer along. As a result, open and participatory speech has turned into its opposite. Important voices are silenced by mobs of trolls using open platforms to hurl abuse and threats. Bogus news shared from one friend or follower to the next becomes received wisdom. Crucial pieces of information drown in so much irrelevance that they are lost. If books were burned in the street, we would be alarmed. Now, we are simply exhausted.
    3. For the longest time, we thought that as speech became more democratized, democracy itself would flourish. As more and more people could broadcast their words and opinions, there would be an ever-fiercer battle of ideas—with truth emerging as the winner, stronger from the fight. But in 2018, it is increasingly clear that more speech can in fact threaten democracy. The glut of information we now face, made possible by digital tools and social media platforms, can bury what is true, greatly elevate and amplify misinformation and distract from what is important.
    4. But in the digital age, when speech can exist mostly unfettered, the big threat to truth looks very different. It’s not just censorship, but an avalanche of undistinguished speech—some true, some false, some fake, some important, some trivial, much of it out-of-context, all burying us.
  7. Aug 2018
    1. The first of the two maps in the GIF image below shows the US political spectrum on the eve of the 2016 election. The second map highlights the followers of a 30-something American woman called Jenna Abrams, a following gained with her viral tweets about slavery, segregation, Donald Trump, and Kim Kardashian. Her far-right views endeared her to conservatives, and her entertaining shock tactics won her attention from several mainstream media outlets and got her into public spats with prominent people on Twitter, including a former US ambassador to Russia. Her following in the right-wing Twittersphere enabled her to influence the broader political conversation. In reality, she was one of many fake personas created by the infamous St. Petersburg troll farm known as the Internet Research Agency.
    2. Instead of trying to force their messages into the mainstream, these adversaries target polarized communities and “embed” fake accounts within them. The false personas engage with real people in those communities to build credibility. Once their influence has been established, they can introduce new viewpoints and amplify divisive and inflammatory narratives that are already circulating. It’s the digital equivalent of moving to an isolated and tight-knit community, using its own language quirks and catering to its obsessions, running for mayor, and then using that position to influence national politics.
    3. However, as the following diagrams will show, the middle is a lot weaker than it looks, and this makes public discourse vulnerable both to extremists at home and to manipulation by outside actors such as Russia.
    1. Most Americans pay at least a little attention to current events, but they differ enormously in where they turn to get their news and which stories they pay attention to. To get a better sense of how a busy news cycle played out in homes across the country, we repeated an experiment, teaming up with YouGov to ask 1,000 people nationwide to describe their news consumption and respond to a simple prompt: “In your own words, please describe what you would say happened in the news on Tuesday.”
  8. Jul 2018
    1. "The internet has become the main threat — a sphere that isn't controlled by the Kremlin," said Pavel Chikov, a member of Russia's presidential human rights council. "That's why they're going after it. Its very existence as we know it is being undermined by these measures."
    2. Gatov, who is the former head of Russia's state newswire's media analytics laboratory, told BuzzFeed the documents were part of long-term Kremlin plans to swamp the internet with comments. "Armies of bots were ready to participate in media wars, and the question was only how to think their work through," he said. "Someone sold the thought that Western media, which specifically have to align their interests with their audience, won't be able to ignore saturated pro-Russian campaigns and will have to change the tone of their Russia coverage to placate their angry readers."
    3. "There's no paradox here. It's two sides of the same coin," Igor Ashmanov, a Russian internet entrepreneur known for his pro-government views, told BuzzFeed. "The Kremlin is weeding out the informational field and sowing it with cultured plants. You can see what will happen if they don't clear it out from the gruesome example of Ukraine."
    4. The trolls appear to have taken pains to learn the sites' different commenting systems. A report on initial efforts to post comments discusses the types of profanity and abuse that are allowed on some sites, but not others. "Direct offense of Americans as a race are not published ('Your nation is a nation of complete idiots')," the author wrote of fringe conspiracy site WorldNetDaily, "nor are vulgar reactions to the political work of Barack Obama ('Obama did shit his pants while talking about foreign affairs, how you can feel yourself psychologically comfortable with pants full of shit?')." Another suggested creating "up to 100" fake accounts on the Huffington Post to master the site's complicated commenting system.
    5. According to the documents, which are attached to several hundred emails sent to the project's leader, Igor Osadchy, the effort was launched in April and is led by a firm called the Internet Research Agency. It's based in a Saint Petersburg suburb, and the documents say it employs hundreds of people across Russia who promote Putin in comments on Russian blogs.
    6. The documents show instructions provided to the commenters that detail the workload expected of them. On an average working day, the Russians are to post on news articles 50 times. Each blogger is to maintain six Facebook accounts publishing at least three posts a day and discussing the news in groups at least twice a day. By the end of the first month, they are expected to have won 500 subscribers and get at least five posts on each item a day. On Twitter, the bloggers are expected to manage 10 accounts with up to 2,000 followers and tweet 50 times a day.
    7. Russia's campaign to shape international opinion around its invasion of Ukraine has extended to recruiting and training a new cadre of online trolls that have been deployed to spread the Kremlin's message on the comments section of top American websites.Plans attached to emails leaked by a mysterious Russian hacker collective show IT managers reporting on a new ideological front against the West in the comments sections of Fox News, Huffington Post, The Blaze, Politico, and WorldNetDaily.The bizarre hive of social media activity appears to be part of a two-pronged Kremlin campaign to claim control over the internet, launching a million-dollar army of trolls to mold American public opinion as it cracks down on internet freedom at home.
    1. creating a new international news operation called Sputnik to “provide an alternative viewpoint on world events.” More and more, though, the Kremlin is manipulating the information sphere in more insidious ways.
    1. The New Yorker’s Sasha Frere-Jones called Twitter a “self-cleaning oven,” suggesting that false information could be flagged and self-corrected almost immediately. We no longer had to wait 24 hours for a newspaper to issue a correction.
    1. We’ve built an information ecosystem where information can fly through social networks (both technical and personal). Folks keep looking to the architects of technical networks to solve the problem. I’m confident that these companies can do a lot to curb some of the groups who have capitalized on what’s happening to seek financial gain. But the battles over ideology and attention are going to be far trickier. What’s at stake isn’t “fake news.” What’s at stake is the increasing capacity of those committed to a form of isolationist and hate-driven tribalism that has been around for a very long time. They have evolved with the information landscape, becoming sophisticated in leveraging whatever tools are available to achieve power, status, and attention. And those seeking a progressive and inclusive agenda, those seeking to combat tribalism to form a more perfect union —  they haven’t kept up.
    1. Dissemination Mechanisms: Finally, we need to think about how this content is being disseminated. Some of it is being shared unwittingly by people on social media, clicking retweet without checking. Some of it is being amplified by journalists who are now under more pressure than ever to try and make sense and accurately report information emerging on the social web in real time. Some of it is being pushed out by loosely connected groups who are deliberately attempting to influence public opinion, and some of it is being disseminated as part of sophisticated disinformation campaigns, through bot networks and troll factories.
    2. When messaging is coordinated and consistent, it easily fools our brains, already exhausted and increasingly reliant on heuristics (simple psychological shortcuts) due to the overwhelming amount of information flashing before our eyes every day. When we see multiple messages about the same topic, our brains use that as a short-cut to credibility. It must be true we say — I’ve seen that same claim several times today.
    3. I saw Eliot Higgins present in Paris in early January, and he listed four ‘Ps’ which helped explain the different motivations. I’ve been thinking about these a great deal and using Eliot’s original list have identified four additional motivations for the creation of this type of content: Poor Journalism, Parody, to Provoke or ‘Punk’, Passion, Partisanship, Profit, Political Influence or Power, and Propaganda. This is a work in progress but once you start breaking these categories down and mapping them against one another you begin to see distinct patterns in terms of the types of content created for specific purposes.
    4. Back in November, I wrote about the different types of problematic information I saw circulate during the US election. Since then, I’ve been trying to refine a typology (and thank you to Global Voices for helping me to develop my definitions even further). I would argue there are seven distinct types of problematic content that sit within our information ecosystem. They sit on a scale, one that loosely measures the intent to deceive.
    5. By now we’ve all agreed the term “fake news” is unhelpful, but without an alternative, we’re left awkwardly using air quotes whenever we utter the phrase. The reason we’re struggling with a replacement is because this is about more than news, it’s about the entire information ecosystem. And the term fake doesn’t begin to describe the complexity of the different types of misinformation (the inadvertent sharing of false information) and disinformation (the deliberate creation and sharing of information known to be false).
    1. For one, much of the new research centers on U.S. politics and, specifically, elections. But social networks drive conversations about many other topics such as business, education, health, and personal relationships. To battle bad online information, it would be helpful to know whether people respond to these sorts of topics differently than they respond to information about political candidates and elections. It also would be useful to know whether myths about certain subjects — for instance, a business product or education trend — are trickier to correct than others.
    2. Scholars have known for decades that people tend to search for and believe information that confirms what they already think is true. The new elements are social media and the global networks of friends who use it. People let their guard down on online platforms such as Facebook and Twitter, where friends, family members, and coworkers share photos, gossip, and a wide variety of other information. That’s one reason why people may fall for false news, as S. Shyam Sundar, a Pennsylvania State University communication professor, explains in The Conversation. Another reason: People are less skeptical of information they encounter on platforms they have personalized — through friend requests and “liked” pages, for instance — to reflect their interests and identity.
    3. Another key, potentially surprising, takeaway from that study: “In general, fake news consumption seems to be a complement to, rather than a substitute for, hard news — visits to fake news websites are highest among people who consume the most hard news and do not measurably decrease among the most politically knowledgeable individuals.”
    4. Reuters Institute for the Study of Journalism at the University of Oxford released a report showing that false news sites appear to have a limited reach in Europe. For instance, in France, where Russians are accused of trying to interfere with the most recent presidential election, most of the false news sites studied reached 1% or less of the country’s online population each month in 2017. However, when researchers looked at how people interacted with false news on Facebook — via shares and comments, for example — “a handful of false news outlets in [the] sample generated more or as many interactions as established news brands.”
    5. As false news has become a global phenomenon, scholars have responded. They’ve ramped up their efforts to understand how and why bad information spreads online — and how to stop it. In the past 18 months, they’ve flooded academic journals with new research and have raised the level of urgency. In a March 2018 article, titled “The Science of Fake News,” in the prestigious journal Science, 16 high-profile academics came together to issue a call to action, urging internet and social media platforms to work with scholars to evaluate the problem and find solutions.
  9. Nov 2017
    1. Via email, the primary investigator of COURAGE, Dr William Boden (Boston University, MA), highlighted that investigators found no subset of patients that did better with PCI vs OMT. Not those with multivessel disease and EF<50%, not those with LAD disease, and not those with nuclear studies showing moderate-severe ischemia.[7–9]
    2. Respected investigators from Imperial College London have shaken the core of cardiology. The stakes could not be bigger. Millions of people have received stents for stable coronary artery disease (CAD) at a cost of billions of dollars.
    3. Coronary Stents Humbled Yet Again in Stable CAD
  10. May 2017
  11. Dec 2016
    1. Poe’s law also played a prominent role in Facebook’s fake news problem, particularly in the spread of articles written with the cynical intention of duping Trump supporters through fabrication and misinformation. Readers may have passed these articles along as gospel because they really did believe, for example, that an FBI agent investigating Hillary Clinton’s private email server died mysteriously. Or maybe they didn’t believe it but wanted to perpetuate the falsehood for a laugh, out of boredom, or simply to watch the world burn. Each motive equally possible, each equally unverifiable, and each normalizing and incentivizing the spread of outright lies.

      Both Vectors

      Fake news was spread by both people who believed it and people who thought it was funny. Interestingly, it was spread on both vectors simultaneously.

      For some purposes it doesn't actually matter whether people believed it or not -- and this is where it gets interesting. The spreading of lies as hoaxes or lies as disinformation both undermine the idea of truth, and, as the author states, the "normalizing and incentivizing" of outright lies.

    2. But 2016 was also marked—besieged, even—by Poe’s law, a decade-old internet adage articulated by Nathan Poe, a commentator on a creationism discussion thread. Building on the observation that “real” creationists posting to the forum were often difficult to parse from those posing as creationists, Poe’s law stipulates that online, sincere expressions of extremism are often indistinguishable from satirical expressions of extremism.

      Poe's law states that on the internet satirical expressions of extremism are not distinguishable from real expressions of extremism. A good example of this is how fake news (hoaxes) led to fake news (disinformation).

      Poe's Law is also why categorizing disinformation as disinformation is hard. We actually don't know the intent. We just know it is not true, manufactured out of whole cloth.

    1. Russia’s increasingly sophisticated propaganda machinery — including thousands of botnets, teams of paid human “trolls,” and networks of Web sites and social-media accounts — echoed and amplified right-wing sites across the Internet as they portrayed Clinton as a criminal hiding potentially fatal health problems and preparing to hand control of the nation to a shadowy cabal of global financiers.

      http://warontherocks.com/2016/11/trolling-for-trump-how-russia-is-trying-to-destroy-our-democracy

      Another group, PropOrNot, is supposed to be releasing their study on Russian propaganda tomorrow, 25 November. [Update: PropOrNot apparently labelled so many sites as "Russian propaganda" that it is practically a piece of disinformation all by itself. Maybe they're Russian. :) http://www.newyorker.com/news/news-desk/the-propaganda-about-russian-propaganda]