42 Matching Annotations
  1. Dec 2022
    1. We analyzed URLs cited in Twitter messages before and after the temporary interruption of the vaccine development on September 9, 2020 to investigate the presence of low credibility and malicious information. We show that the halt of the AstraZeneca clinical trials prompted tweets that cast doubt, fear and vaccine opposition. We discovered a strong presence of URLs from low credibility or malicious websites, as classified by independent fact-checking organizations or identified by web hosting infrastructure features. Moreover, we identified what appears to be coordinated operations to artificially promote some of these URLs hosted on malicious websites.
    1. When public health emergencies break out, social bots are often seen as the disseminator of misleading information and the instigator of public sentiment (Broniatowski et al., 2018; Shi et al., 2020). Given this research status, this study attempts to explore how social bots influence information diffusion and emotional contagion in social networks.
    1. Furthermore, our results add to the growing body of literature documenting—at least at this historical moment—the link between extreme right-wing ideology and misinformation8,14,24 (although, of course, factors other than ideology are also associated with misinformation sharing, such as polarization25 and inattention17,37).

      This report finds misinformation exposure and extreme right-wing ideology to be associated; others find that partisanship, rather than ideology, is what predicts susceptibility.

    2. And finally, at the individual level, we found that estimated ideological extremity was more strongly associated with following elites who made more false or inaccurate statements among users estimated to be conservatives compared to users estimated to be liberals. These results on political asymmetries are aligned with prior work on news-based misinformation sharing

      This suggests that misinformation-sharing elites may influence whether their followers become more extreme. There is little incentive not to stoke outrage, since outrage improves engagement.

    1. We find that, during the pandemic, no-vax communities became more central in the country-specific debates and their cross-border connections strengthened, revealing a global Twitter anti-vaccination network. U.S. users are central in this network, while Russian users also become net exporters of misinformation during vaccination roll-out. Interestingly, we find that Twitter’s content moderation efforts, and in particular the suspension of users following the January 6th U.S. Capitol attack, had a worldwide impact in reducing misinformation spread about vaccines. These findings may help public health institutions and social media platforms to mitigate the spread of health-related, low-credible information by revealing vulnerable online communities
    1. We applied two scenarios to compare how these regular agents behave in the Twitter network, with and without malicious agents, to study how much influence malicious agents have on the general susceptibility of the regular users. To achieve this, we implemented a belief value system to measure how impressionable an agent is when encountering misinformation and how its behavior gets affected. The results indicated similar outcomes in the two scenarios as the affected belief value changed for these regular agents, exhibiting belief in the misinformation. Although the change in belief value occurred slowly, it had a profound effect when the malicious agents were present, as many more regular agents started believing in misinformation.
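      The two-scenario, belief-value setup described in this abstract can be illustrated with a toy model. The sketch below is my own minimal interpretation, not the paper's implementation: the agent counts, the 0.05 update rate, and the random-mixing rule are all invented for illustration.

```python
import random

def run_sim(n_regular=100, n_malicious=0, steps=50, seed=42):
    """Toy belief-update model. Each regular agent holds a belief value in
    [0, 1] (1 = fully believes the misinformation). Malicious agents always
    push belief-raising messages; regular agents nudge each other toward
    whatever belief a randomly chosen peer holds. All parameters are
    illustrative, not taken from the paper."""
    rng = random.Random(seed)
    beliefs = [0.1] * n_regular  # regular agents start out sceptical
    for _ in range(steps):
        for i in range(n_regular):
            # The message source is malicious with probability equal to the
            # malicious share of the whole population.
            if rng.random() < n_malicious / ((n_regular + n_malicious) or 1):
                target = 1.0  # malicious agents push pure misinformation
            else:
                target = beliefs[rng.randrange(n_regular)]
            beliefs[i] += 0.05 * (target - beliefs[i])  # slow belief drift
    return sum(beliefs) / n_regular

if __name__ == "__main__":
    print(f"mean belief, no malicious agents:   {run_sim(n_malicious=0):.2f}")
    print(f"mean belief, 20 malicious agents:   {run_sim(n_malicious=20):.2f}")
```

      Even though each individual update is small, the mean belief drifts steadily upward once malicious agents are present, which mirrors the abstract's observation that the change in belief value occurred slowly but had a profound aggregate effect.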

    1. we found that social bots played a bridge role in diffusion in the apparent directional topic like “Wuhan Lab”. Previous research also found that social bots play some intermediary roles between elites and everyday users regarding information flow [43]. In addition, verified Twitter accounts continue to be very influential and receive more retweets, whereas social bots retweet more tweets from other users. Studies have found that verified media accounts remain more central to disseminating information during controversial political events [75]. However, occasionally, even the verified accounts—including those of well-known public figures and elected officials—sent misleading tweets. This inspired us to investigate the validity of tweets from verified accounts in subsequent research. It is also essential to rely solely on science and evidence-based conclusions and avoid opinion-based narratives in a time of geopolitical conflict marked by hidden agendas, disinformation, and manipulation [76].
    2. We analyzed and visualized Twitter data during the prevalence of the Wuhan lab leak theory and discovered that 29% of the accounts participating in the discussion were social bots. We found evidence that social bots play an essential mediating role in communication networks. Although human accounts have a more direct influence on the information diffusion network, social bots have a more indirect influence. Unverified social bot accounts retweet more, and through multiple levels of diffusion, humans are vulnerable to messages manipulated by bots, driving the spread of unverified messages across social media. These findings show that limiting the use of social bots might be an effective method to minimize the spread of conspiracy theories and hate speech online.
    1. The real question isn’t whether platforms like Twitter and Facebook are public squares (because they aren’t), but whether they should be. Should everyone have a right to access these platforms and speak through them the way we all have a right to stand on a soap box downtown and speak through a megaphone? It’s a more complicated ask than we realize—certainly more complicated than those (including Elon Musk himself) who seem to think merely declaring Twitter a public square is sufficient.
  2. Aug 2022
  3. Jun 2022
    1. Ps) I am trying to post daily content like this on LinkedIn using my Slip-Box as the content generator (the same is posted on Twitter, but LinkedIn is easier to read), so if you want to see more like this, feel free to look me up on LinkedIn or Twitter.

      Explicit example of someone using a zettelkasten to develop ideas and create content for distribution online and within social media.


  4. Apr 2022
    1. Mike Caulfield. (2021, March 10). One of the drivers of Twitter daily topics is that topics must be participatory to trend, which means one must be able to form a firm opinion on a given subject in the absence of previous knowledge. And, it turns out, this is a bit of a flaw. [Tweet]. @holden. https://twitter.com/holden/status/1369551099489779714

  5. Jan 2022
  6. Oct 2021
  7. Sep 2021
  8. Aug 2021
  9. Jun 2021
    1. Professor, interested in plagues, and politics. Re-locking my twitter acct when is 70% fully vaccinated.

      Example of a professor/researcher who has apparently made his Tweets public, but intends to re-lock them once the majority of the threat is over.

  10. May 2021
    1. Darren Dahly. (2021, February 24). @SciBeh One thought is that we generally don’t ‘press’ strangers or even colleagues in face to face conversations, and when we do, it’s usually perceived as pretty aggressive. Not sure why anyone would expect it to work better on twitter. Https://t.co/r94i22mP9Q [Tweet]. @statsepi. https://twitter.com/statsepi/status/1364482411803906048

  11. Apr 2021
  12. Mar 2021
  13. Feb 2021
  14. Oct 2020
  15. Sep 2020
    1. ReconfigBehSci on Twitter: “having spent a few days looking at ‘debate’ about COVID policy on lay twitter (not the conspiracy stuff, just the ‘we should all be Sweden’ discussions), the single most jarring (and worrying) thing I noticed is that posters seem completely undeterred by self contradiction 1/3” / Twitter. (n.d.). Retrieved September 23, 2020, from https://twitter.com/SciBeh/status/1308340430170456064

  16. Jul 2020
  17. Jun 2020
  18. May 2020
    1. Part of the problem of social media is that there is no equivalent to the scientific glassblowers’ sign, or the woodworker’s open door, or Dafna and Jesse’s sandwich boards. On the internet, if you stop speaking: you disappear. And, by corollary: on the internet, you only notice the people who are speaking nonstop.

      This quote comes from a larger piece by Robin Sloan. (I don't know who that is though)

      The problem with social media is that the equivalent to working with the garage door open (working in public) is repeatedly talking in public about what you're doing.

      One problem with this is that you need to choose what you want to talk about, and say it. This emphasizes whatever you select, not what would catch a passerby's eye.

      The other problem is that you become more visible by the more you talk. Conversely, when you stop talking, you become invisible.

  19. Oct 2017
    1. Anti-vaccinations groups, for example, have relied on viral videos to sell the panic of vaccination side-effects

      Unfortunately, this is very true, and we can say the same about fake news. Such practices can hurt the validity of the overall data. Twitter data is not gathered through systematic investigation or systematic collection methods; it relies heavily on “public opinion”. Still, if one wants to gauge general public sentiment or general public opinion, this is a great way to do it.

  20. Mar 2016
    1. Where academic Twitter once seemed quietly parochial and collegial almost to the point of excess, it is now thrust into the messy, contested business of being truly open to the public.

      Is being in public the problem, or is it the change in the tone or format of the discourse?

      A fully public, honest, but still civil discussion, one aiming at making a case, creating more awareness, finding solutions, or trying to understand, clarify, and show genuine interest, is surely better than a public fight, right? Or am I misunderstanding this?