768 Matching Annotations
  1. May 2023
  2. Apr 2023
    1. In Notes, writers will be able to post short-form content and share ideas with each other and their readers. Like our Recommendations feature, Notes is designed to drive discovery across Substack. But while Recommendations lets writers promote publications, Notes will give them the ability to recommend almost anything—including posts, quotes, comments, images, and links.

      Substack is slowly adding features and functionality to make itself a full-stack blogging/social platform... first long form, then short-note features...

      Also eating Twitter's lunch as Twitter is having issues.

  3. Mar 2023
    1. Twitter banning him for saying women should "bear responsibility" for being sexually assaulted. He has since been reinstated.

      Twitter had banned and then later reinstated Andrew Tate for saying women should "bear responsibility" for being sexually assaulted.

    1. The Satelite Combination Card Index Cabinet and Telephone Stand

      A fascinating combination of office furniture types in 1906!

      The Adjustable Table Company of Grand Rapids, Michigan manufactured a combination table for both telephones and index cards. It was designed as an accessory to be stood next to one's desk to accommodate a telephone at the beginning of the telephone era and also served as storage for one's card index.

      Given the broad business-based use of the card index at the time and the newness of the telephone, this piece of furniture likely was not designed as an early proto-rolodex, though it certainly could have been (and very well may have been) used as such in practice.


      I totally want one of these as a side table for my couch/reading chair, both for storing index cards and as a temporary writing surface while reading!


      This could also be an early precursor to Twitter!

      Folks have certainly mentioned other incarnations:

      - annotations in books (person to self)
      - postcards (person to person)
      - the telegraph (person to person, and possibly to others by personal communication or newspaper distribution)

      But this is the first version of a short-note user interface for creation, storage, and distribution by means of electrical transmission (via telephone) within a bigger network (still person to person, but with the potential for easy/cheap distribution to more than a single person).

  4. Feb 2023
  5. Jan 2023
  6. Dec 2022
    1. We found that while some participants were aware of bots’ primary characteristics, others provided abstract descriptions or confused bots with other phenomena. Participants also struggled to classify accounts correctly (e.g., misclassifying > 50% of accounts) and were more likely to misclassify bots than non-bots. Furthermore, we observed that perceptions of bots had a significant effect on participants’ classification accuracy. For example, participants with abstract perceptions of bots were more likely to misclassify. Informed by our findings, we discuss directions for developing user-centered interventions against bots.
    1. We analyzed URLs cited in Twitter messages before and after the temporary interruption of the vaccine development on September 9, 2020 to investigate the presence of low credibility and malicious information. We show that the halt of the AstraZeneca clinical trials prompted tweets that cast doubt, fear and vaccine opposition. We discovered a strong presence of URLs from low credibility or malicious websites, as classified by independent fact-checking organizations or identified by web hosting infrastructure features. Moreover, we identified what appears to be coordinated operations to artificially promote some of these URLs hosted on malicious websites.
    1. When public health emergencies break out, social bots are often seen as the disseminator of misleading information and the instigator of public sentiment (Broniatowski et al., 2018; Shi et al., 2020). Given this research status, this study attempts to explore how social bots influence information diffusion and emotional contagion in social networks.
    1. Furthermore, our results add to the growing body of literature documenting—at least at this historical moment—the link between extreme right-wing ideology and misinformation8,14,24 (although, of course, factors other than ideology are also associated with misinformation sharing, such as polarization25 and inattention17,37).

      Misinformation exposure and extreme right-wing ideology appear associated in this report. Others find that it is partisanship that predicts susceptibility.

    2. And finally, at the individual level, we found that estimated ideological extremity was more strongly associated with following elites who made more false or inaccurate statements among users estimated to be conservatives compared to users estimated to be liberals. These results on political asymmetries are aligned with prior work on news-based misinformation sharing

      This suggests that misinformation-sharing elites may influence whether followers become more extreme. There is little incentive not to stoke outrage, as it improves engagement.

    3. We found that users who followed elites who made more false or inaccurate statements themselves shared news from lower-quality news outlets (as judged by both fact-checkers and politically-balanced crowds of laypeople), used more toxic language, and expressed more moral outrage.

      Elite mis- and disinformation sharers have a negative effect on followers.

    4. In the co-share network, a cluster of websites shared more by conservatives is also shared more by users with higher misinformation exposure scores.

      Nodes represent website domains shared by at least 20 users in our dataset; edges are weighted based on the number of common users who shared them. (a) Separate colors represent different clusters of websites determined using community-detection algorithms29. (b) The intensity of each node's color shows the average misinformation-exposure score of users who shared the website domain (darker = higher PolitiFact score). (c) Node color represents the average estimated ideology of the users who shared the website domain (red: conservative, blue: liberal). (d) The intensity of each node's color shows the average language toxicity of users who shared the website domain (darker = more toxic language). (e) The intensity of each node's color shows the average expression of moral outrage by users who shared the website domain (darker = more moral outrage). Nodes are positioned using a directed-force layout on the weighted network.
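
      As a rough illustration of how such a co-share network can be built, here is a minimal sketch in Python using networkx. The `shares` mapping is hypothetical sample data, and greedy modularity stands in for whichever community-detection algorithm the paper actually used:

      ```python
      # Sketch of a co-share network: nodes are domains, edge weights count
      # the users who shared both endpoints. Hypothetical data, not the
      # paper's; the paper also keeps only domains shared by >= 20 users.
      from collections import Counter
      from itertools import combinations

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      shares = {
          "user1": {"siteA.com", "siteB.com"},
          "user2": {"siteA.com", "siteB.com", "siteC.com"},
          "user3": {"siteB.com", "siteC.com"},
      }

      # Edge weight = number of common users who shared both domains.
      edge_weights = Counter()
      for domains in shares.values():
          for a, b in combinations(sorted(domains), 2):
              edge_weights[(a, b)] += 1

      G = nx.Graph()
      for (a, b), w in edge_weights.items():
          G.add_edge(a, b, weight=w)

      # (a) Clusters via community detection (greedy modularity here).
      clusters = greedy_modularity_communities(G, weight="weight")

      # Force-directed node positions, analogous to the caption's layout.
      pos = nx.spring_layout(G, weight="weight", seed=42)

      print([sorted(c) for c in clusters])
      ```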

    5. Aligned with prior work finding that people who identify as conservative consume15, believe24, and share more misinformation8,14,25, we also found a positive correlation between users’ misinformation-exposure scores and the extent to which they are estimated to be conservative ideologically (Fig. 2c; b = 0.747, 95% CI = [0.727, 0.767], SE = 0.010, t(4332) = 73.855, p < 0.001), such that users estimated to be more conservative are more likely to follow the Twitter accounts of elites with higher fact-checking falsity scores. Critically, the relationship between misinformation-exposure score and quality of content shared is robust controlling for estimated ideology
    1. Notice that Twitter’s account purge significantly impacted misinformation spread worldwide: the proportion of low-credible domains in URLs retweeted from U.S. dropped from 14% to 7%. Finally, despite not having a list of low-credible domains in Russian, Russia is central in exporting potential misinformation in the vax rollout period, especially to Latin American countries. In these countries, the proportion of low-credible URLs coming from Russia increased from 1% in vax development to 18% in vax rollout periods (see Figure 8 (b), Appendix).

    2. Interestingly, the fraction of low-credible URLs coming from U.S. dropped from 74% in the vax development period to 55% in the vax rollout. This large decrease can be directly ascribed to Twitter’s moderation policy: 46% of cross-border retweets of U.S. users linking to low-credible websites in the vax development period came from accounts that have been suspended following the U.S. Capitol attack (see Figure 8 (a), Appendix).
    3. We find that, during the pandemic, no-vax communities became more central in the country-specific debates and their cross-border connections strengthened, revealing a global Twitter anti-vaccination network. U.S. users are central in this network, while Russian users also become net exporters of misinformation during vaccination roll-out. Interestingly, we find that Twitter’s content moderation efforts, and in particular the suspension of users following the January 6th U.S. Capitol attack, had a worldwide impact in reducing misinformation spread about vaccines. These findings may help public health institutions and social media platforms to mitigate the spread of health-related, low-credible information by revealing vulnerable online communities
    1. We estimated the contribution of verified accounts to sharing and amplifying links to Russian propaganda and low-credibility sources, noticing that they have a disproportionate role. In particular, superspreaders of Russian propaganda are mostly accounts verified by both Facebook and Twitter, likely due to Russian state-run outlets having associated accounts with verified status. In the case of generic low-credibility sources, a similar result applies to Facebook but not to Twitter, where we also notice a few superspreader accounts that are not verified by the platform.
    2. On Twitter, the picture is very similar in the case of Russian propaganda, where all accounts are verified (with a few exceptions) and mostly associated with news outlets, and generate over 68% of all retweets linking to these websites (see panel a of Figure 4). For what concerns low-credibility news, there are both verified (we can notice the presence of seanhannity) and not verified users, and only a few of them are directly associated with websites (e.g. zerohedge or Breaking911). Here the top 15 accounts generate roughly 30% of all retweets linking to low-credibility websites.
    3. Figure 5: Top 15 spreaders of Russian propaganda (a) and low-credibility content (b) ranked by the proportion of retweets generated over the period of observation, with respect to all retweets linking to websites in each group. We indicate those that are not verified using “hatched” bars

    4. Figure 4: Top 15 spreaders of Russian propaganda (a) and low-credibility content (b) ranked by the proportion of interactions generated over the period of observation, with respect to all interactions around links to websites in each group. Given the large number of verified accounts, we indicate those not verified using “hatched” bars.

    1. We applied two scenarios to compare how these regular agents behave in the Twitter network, with and without malicious agents, to study how much influence malicious agents have on the general susceptibility of the regular users. To achieve this, we implemented a belief value system to measure how impressionable an agent is when encountering misinformation and how its behavior gets affected. The results indicated similar outcomes in the two scenarios as the affected belief value changed for these regular agents, exhibiting belief in the misinformation. Although the change in belief value occurred slowly, it had a profound effect when the malicious agents were present, as many more regular agents started believing in misinformation.
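
      The mechanism is easy to see in a toy model. Below is a minimal sketch, not the study's actual simulation; the belief-update rule, parameters, and uniform mixing are my own assumptions:

      ```python
      # Toy two-scenario run: regular agents nudge their belief value toward
      # whatever message they encounter; malicious agents always push
      # misinformation (belief 1.0). All names and parameters are illustrative.
      import random

      def run(n_regular=100, n_malicious=10, steps=200, influence=0.05, seed=1):
          rng = random.Random(seed)
          # Regular agents start mostly skeptical: 0 = disbelief, 1 = belief.
          beliefs = [rng.uniform(0.0, 0.2) for _ in range(n_regular)]
          sources = ["regular"] * n_regular + ["malicious"] * n_malicious
          for _ in range(steps):
              for i in range(n_regular):
                  src = rng.choice(sources)
                  msg = 1.0 if src == "malicious" else beliefs[rng.randrange(n_regular)]
                  # Small per-encounter update: belief changes slowly.
                  beliefs[i] += influence * (msg - beliefs[i])
          return sum(beliefs) / n_regular

      print("mean belief without malicious agents:", round(run(n_malicious=0), 3))
      print("mean belief with malicious agents:   ", round(run(n_malicious=10), 3))
      ```

      Run as-is, the average belief barely moves in the malicious-free scenario but climbs steadily once malicious agents are present, echoing the slow-but-profound effect the study describes.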

    1. we found that social bots played a bridge role in diffusion in the apparent directional topic like “Wuhan Lab”. Previous research also found that social bots play some intermediary roles between elites and everyday users regarding information flow [43]. In addition, verified Twitter accounts continue to be very influential and receive more retweets, whereas social bots retweet more tweets from other users. Studies have found that verified media accounts remain more central to disseminating information during controversial political events [75]. However, occasionally, even the verified accounts—including those of well-known public figures and elected officials—sent misleading tweets. This inspired us to investigate the validity of tweets from verified accounts in subsequent research. It is also essential to rely solely on science and evidence-based conclusions and avoid opinion-based narratives in a time of geopolitical conflict marked by hidden agendas, disinformation, and manipulation [76].
    2. In Figure 6, the node represented by human A is a high-degree centrality account with poor discrimination ability for disinformation and rumors; it is easily affected by misinformation retweeted by social bots. At the same time, it will also refer to the opinions of other persuasive folk opinion leaders in the retweeting process. Human B represents the official institutional account, which has a high in-degree and often pushes the latest news, preventive measures, and suggestions related to COVID-19. Human C represents a human account with high media literacy, which mainly retweets information from information sources with high credibility. It has a solid ability to identify information quality and is not susceptible to the proliferation of social bots. Human D actively creates and spreads rumors and conspiracy theories and only retweets unverified messages that support his views in an attempt to expand the influence. Social bots K, M, and N also spread unverified information (rumors, conspiracy theories, and disinformation) in the communication network without fact-checking. Social bot L may be a social bot of an official agency.

    3. We analyzed and visualized Twitter data during the prevalence of the Wuhan lab leak theory and discovered that 29% of the accounts participating in the discussion were social bots. We found evidence that social bots play an essential mediating role in communication networks. Although human accounts have a more direct influence on the information diffusion network, social bots have a more indirect influence. Unverified social bot accounts retweet more, and through multiple levels of diffusion, humans are vulnerable to messages manipulated by bots, driving the spread of unverified messages across social media. These findings show that limiting the use of social bots might be an effective method to minimize the spread of conspiracy theories and hate speech online.
    1. In advance of deleting my Twitter account, I made this web page that lets you search my tweets, link to an archived version, and read whole threads I wrote. https://tinysubversions.com/twitter-archive/ I will eventually release this as a website I host where you drop your Twitter zip archive in and it spits out the 100% static site you see here. Then you can just upload it somewhere and you have an archive that is also easy to style how you like it.

      https://friend.camp/@darius/109521972924049369

    1. https://www.movetodon.org/

      What a lovely looking UI.

      The data returned will also give one a strong idea of how many of one's acquaintances have made the jump, as well as how active they may be, particularly for those who moved weeks ago and are still active within the last couple of days. For me the numbers are reasonably large: 860 of 4942 have accounts presently, and in scrolling through, it appears that 80% or so have been active within a day or so, regardless of account age.

    1. Twitter has, like its fellow social media platforms, been working for years to make the process of moderation efficient and systematic enough to function at scale. Not just so the platform isn’t overrun with bots and spam, but in order to comply with legal frameworks like FTC orders and the GDPR.
    1. The trolling is paramount. When former Facebook CSO and Stanford Internet Observatory leader Alex Stamos asked whether Musk would consider implementing his detailed plan for “a trustworthy, neutral platform for political conversations around the world,” Musk responded, “You operate a propaganda platform.” Musk doesn’t appear to want to substantively engage on policy issues: He wants to be aggrieved.
    1. Twitter has never been able to deal with the fact its users both hate using it and also hate each other.
    2. The most typical way users encounter trending content is when a massively viral tweet — or subtweets about that tweet or the discourse it created — enters their feed. Then it’s up to them to figure out what kind of account posted the tweet, what kind of accounts are sharing the tweet, and what accounts are saying about the tweet.
    3. my best guess is it’s the moderation
    1. The presence of Twitter’s code — known as the Twitter advertising pixel — has grown more troublesome since Elon Musk purchased the platform. That’s because under the terms of Musk’s purchase, large foreign investors were granted special privileges. Anyone who invested $250 million or more is entitled to receive information beyond what lower-level investors can receive. Among the higher-end investors are a Saudi prince’s holding company and a Qatari fund.

      Twitter investors may get access to user data

      I'm surprised but not surprised that Musk's dealings to get investors in his effort to take Twitter private may include sharing of personal data about users. This article makes it sound almost normal that this kind of information-sharing happens with investors (inclusion of the phrase "information beyond what lower-level investors can receive").

    1. Last night I posted a message to both Mastodon and Twitter saying how great M's support for RSS is. Apparently a lot of people on Masto didn't know about it and the response has been resounding. And the numbers are very lopsided. The piece has been "boosted" (the Masto equiv of RT) 1.1K times, yet I only have 3.7K followers there. Meanwhile on Twitter, where I have 69K followers, it has been RTd just 17 times. My feeling was previously that Mastodon was more alive, it's good to have a number to put behind that.

      http://scripting.com/2022/12/03.html#a152558

      Anecdotal evidence for the slow death of Twitter and higher engagement on Mastodon.

    1. The real question isn’t whether platforms like Twitter and Facebook are public squares (because they aren’t), but whether they should be. Should everyone have a right to access these platforms and speak through them the way we all have a right to stand on a soap box downtown and speak through a megaphone? It’s a more complicated ask than we realize—certainly more complicated than those (including Elon Musk himself) who seem to think merely declaring Twitter a public square is sufficient.
    2. This tweet, along with the reinstatement of Donald Trump’s Twitter account, has caused a whirlwind of discussion and debate on the platform—the same arguments about free speech and social media as the “digital public square” that seem to go nowhere, regardless of how often we try. And part of the reason they go nowhere is because the situation is both more simple and more complicated than many of us want to recognize.
    1. What I missed about Mastodon was its very different culture. Ad-driven social media platforms are willing to tolerate monumental volumes of abusive users. They’ve discovered the same thing the Mainstream Media did: negative emotions grip people’s attention harder than positive ones. Hate and fear drive engagement, and engagement drives ad impressions. Mastodon is not an ad-driven platform. There is absolutely zero incentive to let awful people run amok in the name of engagement. The goal of Mastodon is to build a friendly collection of communities, not an attention-leeching hate mill. As a result, most Mastodon instance operators have come to a consensus that hate speech shouldn’t be allowed. Already, that sets it far apart from Twitter, but wait, there’s more. When it comes to other topics, what is and isn’t allowed is decided on an instance-by-instance basis, so you can choose your own adventure.

      Attention economy

      Twitter's drivers: hate/fear → engagement → impressions → advertiser money. Since there is no advertising money in Mastodon, it operates on different drivers: an operator isn't driven to get the most impressions, and without the need for a high number of impressions, there isn't a need to fuel the hate/fear drivers.

    1. https://www.downes.ca/post/74564

      Stephen Downes is doing a great job of regular recaps on the shifts in Twitter/Mastodon/Fediverse lately. I either read or saw all these in the last couple of days myself.

  7. Nov 2022
    1. The TTRG (time to reply guy) was getting so fast, that I can’t actually remember the last time I tweeted something helpful like a design or development tip. I just couldn’t be arsed, knowing some dickhead would be around to waste my time with whataboutisms and “will it scale”?
    1. Literature, philosophy, film, music, culture, politics, history, architecture: join the circus of the arts and humanities! For readers, writers, academics or anyone wanting to follow the conversation.
    1. hcommons.social is a microblogging network supporting scholars and practitioners across the humanities and around the world.

      https://hcommons.social/about

      The Humanities Commons has its own Mastodon instance now!

    1. Davidson: I think the interface on Mastodon makes me behave differently. If I have a funny joke or a really powerful statement and I want lots of people to hear it, then Twitter’s way better for that right now. However, if something really provokes a big conversation, it’s actually fairly challenging to keep up with the conversation on Twitter. I find that when something gets hundreds of thousands of replies, it’s functionally impossible to even read all of them, let alone respond to all of them. My Twitter personality, like a lot of people’s, is more shouting. Whereas on Mastodon, it’s actually much harder to go viral. There’s no algorithm promoting tweets. It’s just the people you follow. This is the order in which they come. It’s not really set up for that kind of, “Oh my god, everybody’s talking about this one post.” It is set up to foster conversation. I have something like 150,000 followers on Twitter, and I have something like 2,500 on Mastodon, but I have way more substantive conversations on Mastodon even though it’s a smaller audience. I think there’s both design choices that lead to this and also just the vibe of the place where even pointed disagreements are somehow more thoughtful and more respectful on Mastodon.

      Twitter for Shouting; Mastodon for Conversation

      Many, many followers on Twitter make it hard for conversations to happen, as does the algorithm-driven promotion. Fewer followers and an anti-viral UX make for more conversations, even if the reach isn't as far.

    1. The JFK assassination episode of Mad Men. In one long single shot near the beginning of the episode, a character arrives late to his job and finds the office in disarray, desks empty and scattered with suddenly-abandoned papers, and every phone ringing unanswered. Down the hallway at the end of the room, where a TV is blaring just out of sight, we can make out a rising chatter of worried voices, and someone starting to cry. It is— we suddenly remember— a November morning in 1963. The bustling office has collapsed into one anxious body, huddled together around a TV, ignoring the ringing phones, to share in a collective crisis.

      May I just miss the core of this bit entirely and mention coming home to Betty on the couch, letting the kids watch, unsure of what to do.

      And the fucking Campbells, dressed up for a wedding in front of the TV, unsure of what to do.

      Though, if I might add, comparing Twitter to television in the abstract would be unfortunate, if unfortunately accurate, considering how much more granular the consumption controls available to the user are. Use Twitter Lists, you godforsaken human beings.

    1. Part of what makes Twitter’s potential collapse uniquely challenging is that the “digital public square” has been built on the servers of a private company, says O’Connor’s colleague Elise Thomas, senior OSINT analyst with the ISD. It’s a problem we’ll have to deal with many times over the coming decades, she says: “This is perhaps the first really big test of that.”

      Public Square content on the servers of a private company

  8. tinysubversions.com
    1. A tool that turns Twitter threads into blog posts, by Darius Kazemi.

      https://tinysubversions.com/spooler/

      via Darius Kazemi in "thread unroller apps" - Friend Camp (11/16/2022 08:27:44)

    1. Another big, big difference with Mastodon is that it has no algorithmic ranking of posts by popularity, virality, or content. Twitter’s algorithm creates a rich-get-richer effect: Once a tweet goes slightly viral, the algorithm picks up on that, pushes it more prominently into users’ feeds, and bingo: It’s a rogue wave. On Mastodon, in contrast, posts arrive in reverse chronological order. That’s it. If you’re not looking at your feed when a post slides by? You’ll miss it.

      No algorithmic ranking on Mastodon

      To make the site sticky and drive ads, Twitter used its algorithmic ranking to find and amplify viral content.
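
      A toy sketch of the difference between the two orderings (my own illustration, not either platform's actual code):

      ```python
      # Engagement-ranked vs. reverse-chronological feeds.
      from dataclasses import dataclass

      @dataclass
      class Post:
          author: str
          timestamp: int   # seconds since some epoch
          engagement: int  # likes + boosts/retweets so far

      posts = [
          Post("alice", 100, 5),
          Post("bob", 200, 9000),   # slightly viral, so ranking amplifies it
          Post("carol", 300, 2),
      ]

      # "Twitter-style": engagement feeds visibility, which feeds engagement.
      ranked = sorted(posts, key=lambda p: p.engagement, reverse=True)

      # "Mastodon-style": strictly reverse chronological, no amplification loop.
      chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

      print([p.author for p in ranked])          # ['bob', 'alice', 'carol']
      print([p.author for p in chronological])   # ['carol', 'bob', 'alice']
      ```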

    1. It's not entirely the Twitter people's fault. They've been taught to behave in certain ways. To chase likes and retweets/boosts. To promote themselves. To perform.

      Twitter trains users to behave a certain way. It rewards a specific type of performance. In contrast, until now at least, Mastodon is focused on conversation (and the functionality of the apps reinforces that, with how boosts and likes work differently).

    2. It is the very tools and settings that provide so much more agency to users that pundits claim make Mastodon "too complicated".

      Indeed.

    3. I hadn't fully understood — really appreciated — how much corporate publishing systems steer people's behaviour until this week. Twitter encourages a very extractive attitude from everyone it touches.

      This stands out indeed.

    4. Early this week, I realised that some people had cross-posted my Mastodon post into Twitter. Someone else had posted a screenshot of it on Twitter. Nobody thought to ask if I wanted that.

      The author expects to be asked for consent before their words are posted in another web venue, here crossposting to Twitter. I don't think that's a priori a reasonable expectation. The entire web is a public sphere, and expressions in it are public expressions. Commenting on them, extending them, is annotation, and that's fair game imo. Problems arise from how that annotation is used/positioned. If it's part of the conversation with the author and others, that's fine, depending on tone (e.g. forcefully butting in), even if unwelcome. If it is quoting an author and commenting as performance for one's own audience, then the original author becomes an object, a prop in that performance. That is problematic. I can't judge here (no links) which of the two it is.

    1. For the type of services I offer and my target audience, Twitter is an unlikely place for me to connect with potential clients

      I've seen it mostly as a place for finding professional peers, like my blog did. But that is the 2006 perspective, pre-algorithm. I wrote about FB's toxicity and quit it; I removed the LinkedIn timeline. Twitter I did differently: following #s on TweetDeck and broadcasting my blog posts. I fight not to be drawn into discussions unless they're responses to my posts. In the past 4 years I have had good conversations on Mastodon. No clients either though, not in my line of work. Some visibility to my existing professional network does very much play an active role though.

    2. Pretending Twitter is the answer to gaining respect for and engagement with my work is an addict’s excuse that removes responsibility from myself.

      Ouch. The metrics of engagement (likes, RTs) make it possible to 'rationalise' this perception of needing it for one's work/career.