127 Matching Annotations
  1. Jun 2024
    1. public media 1.0 was accepted as important but rarely loved

      It is seen as a necessity for making the invisible visible, but the ramifications of worldwide attention on typically invisible things are often unknown and unprecedented.

  2. Feb 2023
    1. “It makes me feel like I need a disclaimer because I feel like it makes you seem unprofessional to have these weirdly spelled words in your captions,” she said, “especially for content that's supposed to be serious and medically inclined.”

      Where's the balance between maintaining professionalism and dodging algorithmic filters in serious health-related conversations online?

      link to: https://hypothes.is/a/uBq9HKqWEe22Jp_rjJ5tjQ

  3. Dec 2022
    1. Furthermore, our results add to the growing body of literature documenting—at least at this historical moment—the link between extreme right-wing ideology and misinformation8,14,24 (although, of course, factors other than ideology are also associated with misinformation sharing, such as polarization25 and inattention17,37).

      Misinformation exposure and extreme right-wing ideology appear to be associated in this report. Others find that it is partisanship, rather than ideology, that predicts susceptibility to misinformation.

    2. And finally, at the individual level, we found that estimated ideological extremity was more strongly associated with following elites who made more false or inaccurate statements among users estimated to be conservatives compared to users estimated to be liberals. These results on political asymmetries are aligned with prior work on news-based misinformation sharing

      This suggests that misinformation-sharing elites may influence whether their followers become more extreme. There is little incentive not to stoke outrage, as it improves engagement.

    1. Exposure to elite misinformation is associated with sharing news from lower-quality outlets and with conservative estimated ideology.

      Shown is the relationship between users’ misinformation-exposure scores and (a) the quality of the news outlets they shared content from, as rated by professional fact-checkers21, (b) the quality of the news outlets they shared content from, as rated by layperson crowds21, and (c) estimated political ideology, based on the ideology of the accounts they follow10. Small dots in the background show individual observations; large dots show the average value across bins of size 0.1, with size of dots proportional to the number of observations in each bin.
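
      The binned presentation the caption describes is easy to reproduce: bin the exposure scores into intervals of width 0.1, then take the mean and the count within each bin. A minimal sketch in Python, using placeholder column names and random data rather than the study's dataset:

      ```python
      import numpy as np
      import pandas as pd

      # Placeholder data: one row per user, with a misinformation-exposure score
      # and an average quality rating for the outlets that user shared from.
      rng = np.random.default_rng(0)
      df = pd.DataFrame({
          "exposure_score": rng.random(5000),
          "outlet_quality": rng.random(5000),
      })

      # Bin exposure scores into intervals of width 0.1; the mean per bin gives the
      # "large dots" and the count per bin sets their plotted size.
      df["bin"] = np.floor(df["exposure_score"] * 10) / 10
      binned = df.groupby("bin")["outlet_quality"].agg(["mean", "count"]).reset_index()
      print(binned)
      ```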

    1. We applied two scenarios to compare how these regular agents behave in the Twitter network, with and without malicious agents, to study how much influence malicious agents have on the general susceptibility of the regular users. To achieve this, we implemented a belief value system to measure how impressionable an agent is when encountering misinformation and how its behavior gets affected. The results indicated similar outcomes in the two scenarios as the affected belief value changed for these regular agents, exhibiting belief in the misinformation. Although the change in belief value occurred slowly, it had a profound effect when the malicious agents were present, as many more regular agents started believing in misinformation.
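
      The belief-value mechanism described here can be illustrated with a toy agent-based sketch (my own construction under assumed parameters, not the authors' model): malicious agents always post misinformation, regular agents repost it only in proportion to their current belief, and each exposure nudges a regular agent's belief upward.

      ```python
      import random

      random.seed(42)

      # Toy illustration of the belief-value idea; agent counts, the update rule,
      # and the threshold are assumptions, not values from the paper.
      N_REGULAR, N_MALICIOUS, STEPS = 200, 20, 50
      SUSCEPTIBILITY = 0.02   # belief gained per misinformation exposure
      THRESHOLD = 0.4         # belief above this counts as "believes the misinformation"

      def run(with_malicious: bool) -> int:
          beliefs = [0.1] * N_REGULAR          # regular agents start mostly sceptical
          for _ in range(STEPS):
              # Malicious agents always post misinformation; regular agents repost
              # only with probability equal to their current belief.
              posters = (N_MALICIOUS if with_malicious else 0)
              posters += sum(1 for b in beliefs if random.random() < b)
              exposure = posters / (N_REGULAR + N_MALICIOUS)
              # Each regular agent independently encounters a misinformation post
              # with probability `exposure`; if so, its belief drifts upward.
              beliefs = [min(1.0, b + SUSCEPTIBILITY) if random.random() < exposure else b
                         for b in beliefs]
          return sum(1 for b in beliefs if b > THRESHOLD)

      print("believers without malicious agents:", run(False))
      print("believers with malicious agents:   ", run(True))
      ```

      Even with a slow per-exposure drift, the constant stream of posts from malicious agents pushes many more regular agents past the belief threshold, which is the qualitative pattern the quoted study reports.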

    1. Therefore, although the social bot individual is “small”, it has become a “super spreader” with strategic significance. As an intelligent communication subject in the social platform, it conspired with the discourse framework in the mainstream media to form a hybrid strategy of public opinion manipulation.
    2. There were 120,118 epidemy-related tweets in this study, and 34,935 Twitter accounts were detected as bot accounts by Botometer, accounting for 29%. In all, 82,688 Twitter accounts were human, accounting for 69%; 2495 accounts had no bot score detected. In social network analysis, degree centrality is an index to judge the importance of nodes in the network. The nodes in the social network graph represent users, and the edges between nodes represent the connections between users. Based on the network structure graph, we may determine which members of a group are more influential than others. In 1979, American professor Linton C. Freeman published an article titled “Centrality in social networks conceptual clarification”, on Social Networks, formally proposing the concept of degree centrality [69]. Degree centrality denotes the number of times a central node is retweeted by other nodes (or other indicators, only retweeted are involved in this study). Specifically, the higher the degree centrality is, the more influence a node has in its network. The measure of degree centrality includes in-degree and out-degree. Betweenness centrality is an index that describes the importance of a node by the number of shortest paths through it. Nodes with high betweenness centrality are in the “structural hole” position in the network [69]. This kind of account connects the group network lacking communication and can expand the dialogue space of different people. American sociologist Ronald S. Burt put forward the theory of a “structural hole” and said that if there is no direct connection between the other actors connected by an actor in the network, then the actor occupies the “structural hole” position and can obtain social capital through “intermediary opportunities”, thus having more advantages.
    3. We analyzed and visualized Twitter data during the prevalence of the Wuhan lab leak theory and discovered that 29% of the accounts participating in the discussion were social bots. We found evidence that social bots play an essential mediating role in communication networks. Although human accounts have a more direct influence on the information diffusion network, social bots have a more indirect influence. Unverified social bot accounts retweet more, and through multiple levels of diffusion, humans are vulnerable to messages manipulated by bots, driving the spread of unverified messages across social media. These findings show that limiting the use of social bots might be an effective method to minimize the spread of conspiracy theories and hate speech online.
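
      Degree centrality (here, essentially how often an account is retweeted) and betweenness centrality (how often an account sits on shortest paths bridging otherwise disconnected parts of the network) are straightforward to compute on a retweet graph. A minimal sketch with networkx, using made-up edges rather than the study's data:

      ```python
      import networkx as nx

      # Hypothetical retweet network: an edge (a, b) means account `a` retweeted `b`,
      # so b's in-degree reflects how often b was retweeted.
      retweets = [("u1", "bot1"), ("u2", "bot1"), ("u3", "bot1"),
                  ("u2", "u4"), ("u4", "bot2"), ("bot1", "u4")]
      G = nx.DiGraph(retweets)

      in_degree = nx.in_degree_centrality(G)      # influence: how widely an account is retweeted
      betweenness = nx.betweenness_centrality(G)  # "structural hole" / bridging positions

      for node in G.nodes:
          print(f"{node}: in-degree={in_degree[node]:.2f}, betweenness={betweenness[node]:.2f}")
      ```
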
    1. I want to insist on an amateur internet; a garage internet; a public library internet; a kitchen table internet.

      Social media should be composed of people from end to end. Corporate interests inserted into the process can only serve to dehumanize the system.


      Robin Sloan is in the same camp as Greg McVerry and me.

  4. Aug 2022
  5. Jun 2022
    1. send off your draft or beta or proposal for feedback. Share this Intermediate Packet with a friend, family member, colleague, or collaborator; tell them that it’s still a work-in-process and ask them to send you their thoughts on it. The next time you sit down to work on it again, you’ll have their input and suggestions to add to the mix of material you’re working with.

      A major benefit of working in public is that it invites immediate feedback (hopefully positive, constructive criticism) from anyone who might be reading it, including pre-built audiences, whether through social media or in a classroom setting using discussion or social annotation methods.

      This feedback along the way may help uncover flaws in arguments, surface additional examples of patterns, or suggest links to ideas one might not have considered on one's own.

      Sadly, depending on your reader's context and understanding of your work, there are the attendant dangers of context collapse, which may elicit the wrong sorts of feedback, not to mention general abuse.

  6. Apr 2022
    1. Mike Caulfield. (2021, March 10). One of the drivers of Twitter daily topics is that topics must be participatory to trend, which means one must be able to form a firm opinion on a given subject in the absence of previous knowledge. And, it turns out, this is a bit of a flaw. [Tweet]. @holden. https://twitter.com/holden/status/1369551099489779714

  7. Feb 2022
  8. Jan 2022
  9. Dec 2021
    1. Deepti Gurdasani. (2021, December 23). Some brief thoughts on the concerning relativism I’ve seen creeping into media, and scientific rhetoric over the past 20 months or so—The idea that things are ok because they’re better relative to a point where things got really really bad. 🧵 [Tweet]. @dgurdasani1. https://twitter.com/dgurdasani1/status/1474042179110772736

  10. Nov 2021
  11. Oct 2021
  12. Sep 2021
  13. Aug 2021
  14. Jul 2021
  15. Jun 2021
    1. Deepti Gurdasani on Twitter: “I’m still utterly stunned by yesterday’s events—Let me go over this in chronological order & why I’m shocked. - First, in the morning yesterday, we saw a ‘leaked’ report to FT which reported on @PHE_uk data that was not public at the time🧵” / Twitter. (n.d.). Retrieved June 27, 2021, from https://twitter.com/dgurdasani1/status/1396373990986375171

    1. Professor, interested in plagues, and politics. Re-locking my twitter acct when is 70% fully vaccinated.

      Example of a professor/researcher who has apparently made his tweets public, but intends to re-lock them once the majority of the threat is over.

  16. May 2021
    1. Amidst the global pandemic, this might sound not dissimilar to public health. When I decide whether to wear a mask in public, that’s partially about how much the mask will protect me from airborne droplets. But it’s also—perhaps more significantly—about protecting everyone else from me. People who refuse to wear a mask because they’re willing to risk getting Covid are often only thinking about their bodies as a thing to defend, whose sanctity depends on the strength of their individual immune system. They’re not thinking about their bodies as a thing that can also attack, that can be the conduit that kills someone else. People who are careless about their own data because they think they’ve done nothing wrong are only thinking of the harms that they might experience, not the harms that they can cause.

      What lessons might we draw from public health and epidemiology to improve our privacy lives in an online world? How might we wear social media "masks" to protect our friends and loved ones from our own posts?

    1. Darren Dahly. (2021, February 24). @SciBeh One thought is that we generally don’t ‘press’ strangers or even colleagues in face to face conversations, and when we do, it’s usually perceived as pretty aggressive. Not sure why anyone would expect it to work better on twitter. Https://t.co/r94i22mP9Q [Tweet]. @statsepi. https://twitter.com/statsepi/status/1364482411803906048

  17. Apr 2021
  18. Mar 2021
  19. Feb 2021
  20. Jan 2021
  21. Oct 2020
  22. Sep 2020
  23. Aug 2020
  24. www.nbcnews.com