-
jonathanhaidt.com
-
-
for: annotate, annotate - social media, progress trap - social media
-
source: connectathon 2023 09 23
- session on social media
-
-
- Sep 2023
-
docdrop.org
-
- for: doppelganger, conflict resolution, deep humanity, common denominators, CHD, Douglas Rushkoff, Naomi Klein, Into the Mirror World, conspiracy theory, conspiracy theories, conspiracy culture, nonduality, self-other, human interbeing, polycrisis, othering, storytelling, myth-making, social media amplifier
- summary
- This conversation was insightful on many dimensions salient to the polycrisis humanity is moving through.
- It brings to mind the old cliches:
- "The more things change, the more they stay the same"
- "What's old is new again"
- "History repeats"
- the conversation explores Naomi Klein's latest book (as of this podcast), Into the Mirror World, in which she adopts a different style of writing to explicate, articulate, and give voice to
- implicit and tacit discomforting ideas and feelings she experienced during COVID and earlier, which
- came into focus through a personal comparative analysis with another female author and thought leader, Naomi Wolf,
- The conversation explores the process of:
- othering,
- coopting and
- abandoning
- of ideas important for personal and social wellbeing,
- and speaks to the need to identify what is going on and to reclaim those ideas for the sake of humanity
- In this context, the doppelganger is a person who is a mirror-like image of ourselves, but on the other side of polarized issues.
- Charismatic leaders who are bad actors are often good at identifying the suffering of the masses, and coopt the ideas of good actors to serve their own ends of self-enrichment.
- There are real-world conspiracies that have caused, and still cause, significant societal harm;
- however, for phenomena of which we have no direct sense experience, the mixture of
- a sense of helplessness,
- anger emerging from injustice, and
- a charismatic leader proposing a concrete, plausible explanatory theory
- is a powerful story whose mythology can be reified by many people believing it
- Another cliche springs to mind
- A lie told a hundred times becomes a truth
- hence the amplifying role of social media
- When we think about where this phenomenon manifests, we find it everywhere:
Tags
- conspiracy theories
- nonduality
- self-other entanglement
- doppelganger
- common denominators
- CHD
- Naomi Klein
- storytelling
- social media amplifier
- polycrisis
- othering
- Douglas Rushkoff
- conspiracy theory
- Deep Humanity
- conspiracy culture
- conflict resolution
- myth-making
- human interbeing
- Into the Mirror World
Annotators
URL
-
- Aug 2023
-
factr.com factr.com
-
A social network for "organizing and sharing your knowledge".
-
-
Local file
-
T9 (text prediction) : generative AI :: handgun : machine gun
-
-
www.pewresearch.org
-
I do expect new social platforms to emerge that focus on privacy and ‘fake-free’ information, or at least they will claim to be so. Proving that to a jaded public will be a challenge. Resisting the temptation to exploit all that data will be extremely hard. And how to pay for it all? If it is subscriber-paid, then only the wealthy will be able to afford it.
- for: quote, quote - Sam Adams, quote - social media
- quote, indyweb - support, people-centered
- I do expect new social platforms to emerge that focus on privacy and ‘fake-free’ information, or at least they will claim to be so.
- Proving that to a jaded public will be a challenge.
- Resisting the temptation to exploit all that data will be extremely hard.
- And how to pay for it all?
- If it is subscriber-paid, then only the wealthy will be able to afford it.
- author: Sam Adams
- 24-year IBM veteran, senior research scientist in AI at RTI International working on national-scale knowledge graphs for global good
- comment
- his comment about exploiting all that data is based on an assumption:
- a centralized, server-based data model
- this doesn't hold true with a people-centered, person-owned data network such as Indyweb
-
Will members-only, perhaps subscription-based ‘online communities’ reemerge instead of ‘post and we’ll sell your data’ forms of social media? I hope so, but at this point a giant investment would be needed to counter the mega-billions of companies like Facebook!
- for: quote, quote - Janet Salmons, quote - online communities, quote - social media, indyweb - support
- paraphrase
- Will members-only, perhaps subscription-based ‘online communities’ reemerge instead of
- ‘post and we’ll sell your data’ forms of social media?
- I hope so, but at this point a giant investment would be needed to counter the mega-billions of companies like Facebook!
-
-
www.sciencedirect.com
-
David E. Williams, Spencer P. Greenhalgh. (2022). Pseudonymous academics: Authentic tales from the Twitter trenches. The Internet and Higher Education. Volume 55, October 2022 https://doi.org/10.1016/j.iheduc.2022.100870
-
-
www.pewresearch.org
-
The big tech companies, left to their own devices (so to speak), have already had a net negative effect on societies worldwide. At the moment, the three big threats these companies pose – aggressive surveillance, arbitrary suppression of content (the censorship problem), and the subtle manipulation of thoughts, behaviors, votes, purchases, attitudes and beliefs – are unchecked worldwide
- for: quote, quote - Robert Epstein, quote - search engine bias, quote - future of democracy, quote - tilting elections, quote - progress trap, progress trap, cultural evolution, technology - futures, futures - technology, indyweb - support, future - education
- quote
- The big tech companies, left to their own devices, have already had a net negative effect on societies worldwide.
- At the moment, the three big threats these companies pose
- aggressive surveillance,
- arbitrary suppression of content,
- the censorship problem, and
- the subtle manipulation of
- thoughts,
- behaviors,
- votes,
- purchases,
- attitudes and
- beliefs
- are unchecked worldwide
- author: Robert Epstein
- senior research psychologist at the American Institute for Behavioral Research and Technology
- paraphrase
- Epstein's organization is building two technologies that assist in combating these problems:
- passively monitor what big tech companies are showing people online,
- smart algorithms that will ultimately be able to identify online manipulations in realtime:
- biased search results,
- biased search suggestions,
- biased newsfeeds,
- platform-generated targeted messages,
- platform-engineered virality,
- shadow-banning,
- email suppression, etc.
- Tech evolves too quickly to be managed by laws and regulations,
- but monitoring systems are tech, and they can and will be used to curtail the destructive and dangerous powers of companies like Google and Facebook on an ongoing basis.
- reference
- seminal paper on monitoring systems, 'Taming Big Tech': https://is.gd/K4caTW
Tags
- progress trap - Google
- progress trap - search engine
- progress trap - digital technology
- quote - search engine manipulation effect
- SEME
- quote - election bias
- quote - mind control
- search engine manipulation effect
- quote - Robert Epstein
- search engine bias
- quote - tilting elections
- quote - progress trap
- progress trap - social media
- progress trap
- quote - SEME
- quote
-
-
hackernoon.com
-
- for: tilting elections, voting - social media, voting - search engine bias, SEME, search engine manipulation effect, Robert Epstein
- summary
- research showing that search engine results can be biased toward a political candidate and tilt an election in favor of a particular party.
-
In our early experiments, reported by The Washington Post in March 2013, we discovered that Google’s search engine had the power to shift the percentage of undecided voters supporting a political candidate by a substantial margin without anyone knowing.
- for: search engine manipulation effect, SEME, voting, voting - bias, voting - manipulation, voting - search engine bias, democracy - search engine bias, quote, quote - Robert Epstein, quote - search engine bias, stats, stats - tilting elections
- paraphrase
- quote
- In our early experiments, reported by The Washington Post in March 2013,
- we discovered that Google’s search engine had the power to shift the percentage of undecided voters supporting a political candidate by a substantial margin without anyone knowing.
- 2015 PNAS research on SEME
- http://www.pnas.org/content/112/33/E4512.full.pdf?with-ds=yes&ref=hackernoon.com
- stats begin
- search results favoring one candidate
- could easily shift the opinions and voting preferences of real voters in real elections by up to 80 percent in some demographic groups
- with virtually no one knowing they had been manipulated.
- stats end
- Worse still, the few people who had noticed that we were showing them biased search results
- generally shifted even farther in the direction of the bias,
- so being able to spot favoritism in search results is no protection against it.
- stats begin
- Google’s search engine
- with or without any deliberate planning by Google employees
- was currently determining the outcomes of upwards of 25 percent of the world’s national elections.
- This is because Google’s search engine lacks an equal-time rule,
- so it virtually always favors one candidate over another, and that in turn shifts the preferences of undecided voters.
- Because many elections are very close, shifting the preferences of undecided voters can easily tip the outcome.
- stats end
-
What if, early in the morning on Election Day in 2016, Mark Zuckerberg had used Facebook to broadcast “go-out-and-vote” reminders just to supporters of Hillary Clinton? Extrapolating from Facebook’s own published data, that might have given Mrs. Clinton a boost of 450,000 votes or more, with no one but Mr. Zuckerberg and a few cronies knowing about the manipulation.
- for: Hillary Clinton could have won, voting, democracy, voting - social media, democracy - social media, election - social media, facebook - election, 2016 US election, 2016 Trump election, 2016 US election - different results, 2016 election - social media
- interesting fact
- If Facebook had sent a "go out and vote" message on Election Day 2016, Clinton might have received a boost of 450,000 additional votes
- and the outcome of the election might have been different
Tags
- voting - social media
- election - social media
- Trump could have lost
- SEME
- PNAS SEME study
- 2016 US election - different results
- stats - tilting elections
- facebook - election
- quote - search engine bias
- search engine manipulation effect
- quote - Robert Epstein
- democracy - social media
- search engine bias
- 2016 US election
- elections - bias
- stats
- Robert Epstein
- elections - interference
- Washington Post story - search engine bias
- voting - search engine bias
- democracy
- quote
- Hillary Clinton could have won
- democracy - search engine bias
- voting
-
- Jul 2023
-
acecomments.mu.nu
-
As Threads "soars", Bluesky and Mastodon are adopting algorithmic feeds. (Tech Crunch) You will eat the bugs. You will live in the pod. You will read what we tell you. You will own nothing and we don't much care if you are happy.
Applying the WEF meme about pods and bugs to Threads inspiring Bluesky and one Mastodon app to push algorithmic feeds.
-
-
academic.oup.com
-
specific uses of the technology help develop what we call “relational confidence,” or the confidence that one has a close enough relationship to a colleague to ask and get needed knowledge. With greater relational confidence, knowledge sharing is more successful.
-
-
euobserver.com
-
Not that an E2E rule precludes algorithmic feeds: remember, E2E is the idea that you see what you ask to see. If a user opts into a feed that promotes content that they haven't subscribed to at the expense of the things they explicitly asked to see, that's their choice. But it's not a choice that social media services reliably offer, which is how they are able to extract ransom payments from publishers.
I don't understand how you could audit this, unless you had to force a default of chronological presentation of posts etc.
-
-
www.youtube.com
-
the folly of endless blah, blah, blah; people view the mind as a big boy, while in reality it is a little boy who is undisciplined and goes on random rants and tangents, liking and disliking everything it sees on social media
-
-
babylonbee.com
-
"After years of research, our engineers have created a revolution in social media technology: a Twitter clone on Instagram that offers the absolute worst of both worlds," said a VR headset-wearing Zuckerberg in an address to dozens of friends in the Metaverse. "At long last, you can read caustic hot takes written by talentless idiots, while still enjoying oppressive censorship and sepia-toned thirst traps from yoga pants models with obnoxious lip injections. You're welcome!"
Babylon Bee article with made up Mark Zuckerberg quote touting the virtues of Threads. This is some of the Bee's finest writing and not at all inaccurate.
-
- Jun 2023
-
www.youtube.com
-
(14:20-19:00) Dopamine Prediction Error is explained by Andrew Huberman in the following way: When we anticipate something exciting dopamine levels rise and rise, but when we fail it drops below baseline, decreasing motivation and drive immensely, sometimes even causing us to get sad. However, when we succeed, dopamine rises even higher, increasing our drive and motivation significantly... This is the idea that successes build upon each other, and why celebrating the "marginal gains" is a very powerful tool to build momentum and actually make progress. Surprise increases this effect even more: big dopamine hit, when you don't anticipate it.
Social Media algorithms make heavy use of this principle, therefore enslaving its user, in particular infinite scrolling platforms such as TikTok... Your dopamine levels rise as you're looking for that one thing you like, but it drops because you don't always have that one golden nugget. Then it rises once in a while when you find it. This contrast creates an illusion of enjoyment and traps the user in an infinite search of great content, especially when it's shortform. It makes you waste time so effectively. This is related to getting the success mindset of preferring delayed gratification over instant gratification.
It would be useful to reflect and introspect on your dopaminergic baseline, and see what actually increases and decreases your dopamine, and whether those things help you achieve your ambitions. A high dopaminergic baseline (meaning your dopamine circuit is used to big hits from things like playing games, watching shortform content, or watching porn) decreases your ability to focus for long periods (attention span), and by extension your ability to learn and eventually reach success. Studying and learning can actually be fun if your dopamine levels are managed properly, meaning you don't often engage in very high-dopamine activities. You want your brain to be used to the low amounts of dopamine that studying gives. A framework to help with this reflection would be Kolb's.
A short-term dopamine reset is to not use the tool or device for about half an hour to an hour (or do NSDR). However, this is not a long-term solution.
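The prediction-error idea described above can be sketched numerically. This is a minimal illustration of the generic reward-prediction-error formula from reinforcement learning (error = reward received minus reward expected), not Huberman's own model; the function name and all the numbers are invented for illustration:

```python
# Sketch of the dopamine/reward prediction-error idea:
# the signal shifts by (reward received - reward expected).
# An unexpected win yields the largest positive error ("surprise");
# an anticipated win that fails yields a negative error (the post-failure dip).

def prediction_error(expected: float, received: float) -> float:
    """Positive when the outcome beats expectations, negative when it falls short."""
    return received - expected

# Illustrative scenarios (made-up, unitless "dopamine" values)
scenarios = {
    "anticipated success": prediction_error(expected=0.8, received=1.0),  # small boost
    "anticipated failure": prediction_error(expected=0.8, received=0.0),  # big dip
    "unexpected success":  prediction_error(expected=0.1, received=1.0),  # biggest boost
}

for name, delta in scenarios.items():
    print(f"{name}: {delta:+.1f}")
```

The "infinite scroll" dynamic maps onto this sketch: mostly mediocre content keeps expectations low, so the occasional golden nugget lands as a large positive surprise, which is exactly the error signal that reinforces continued scrolling.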
-
-
www.marginalia.nu
- May 2023
-
nostr.com
-
Nostr is a simple, open protocol that enables global, decentralized, and censorship-resistant social media.
Peter Kominski likes this generally.
-
-
-
Trakt Data Recovery: IMPORTANT. On December 11 at 7:30 pm PST our main database crashed and corrupted some of the data. We're deeply sorry for the extended downtime and we'll do better moving forward. Updates to our automated backups are already in place and they will be tested on an ongoing basis. Data prior to November 7 is fully restored. Watched history between November 7 and December 11 has been recovered; there is a separate message on your dashboard allowing you to review and import any recovered data. All other data (besides watched history) after November 7 has already been restored and imported. Some data might be permanently lost due to data corruption. The Trakt API is back online as of December 20. Active VIP members will get 2 free months added to their expiration date.
From late 2022
-
-
atomicbooks.com
-
https://atomicbooks.com/pages/john-waters-fan-mail
John Waters receives fan mail via Atomic Books in Baltimore, MD.
-
-
firesky.tv
-
- Apr 2023
-
www.reddit.com
-
Benefits of sharing permanent notes
reply to u/bestlunchtoday at https://www.reddit.com/r/Zettelkasten/comments/12gadut/benefits_of_sharing_permanent_notes/
I love the diversity of ideas here! So many different ways to do it all and perspectives on the pros/cons. It's all incredibly idiosyncratic, just like our notes.
I probably default to a far extreme of sharing the vast majority of my notes openly to the public (at least the ones taken digitally which account for probably 95%). You can find them here: https://hypothes.is/users/chrisaldrich.
Not many people notice or care, but I do know that a small handful follow and occasionally reply to them or email me questions. One or two people actually subscribe to them via RSS, and at least one has said that they know more about me, what I'm reading, what I'm interested in, and who I am by reading these over time. (I also personally follow a handful of people and tags there myself.) Some have remarked at how they appreciate watching my notes over time and then seeing the longer writing pieces they were integrated into. Some novice note takers have mentioned how much they appreciate being able to watch such a process of note taking turned into composition as examples which they might follow. Some just like a particular niche topic and follow it as a tag (so if you were interested in zettelkasten perhaps?) Why should I hide my conversation with the authors I read, or with my own zettelkasten unless it really needed to be private? Couldn't/shouldn't it all be part of "The Great Conversation"? The tougher part may be having means of appropriately focusing on and sharing this conversation without some of the ills and attention economy practices which plague the social space presently.
There are a few notes here on this post that talk about social media and how this plays a role in making them public or not. I suppose that if I were putting it all on a popular platform like Twitter or Instagram then the use of the notes would be or could be considered more performative. Since mine are on what I would call a very quiet pseudo-social network, but one specifically intended for note taking, they tend to be far less performative in nature and the majority of the focus is solely on what I want to make and use them for. I have the opportunity and ability to make some private and occasionally do so. Perhaps if the traffic and notice of them became more prominent I would change my habits, but generally it has been a net positive to have put my sensemaking out into the public, though I will admit that I have a lot of privilege to be able to do so.
Of course for those who just want my longer form stuff, there's a website/blog for that, though personally I think all the fun ideas at the bleeding edge are in my notes.
Since some (u/deafpolygon, u/Magnifico99, and u/thiefspy; cc: u/FastSascha, u/A_Dull_Significance) have mentioned social media, Instagram, and journalists, I'll share a relevant old note with an example, which is also simultaneously an example of the benefit of having public notes to be able to point at, which u/PantsMcFail2 also does here with one of Andy Matuschak's public notes:
[Prominent] Journalist John Dickerson indicates that he uses Instagram as a commonplace: https://www.instagram.com/jfdlibrary/ here he keeps a collection of photo "cards" with quotes from famous people rather than photos. He also keeps collections there of photos of notes from scraps of paper as well as photos of annotations he makes in books.
It's reasonably well known that Ronald Reagan shared some of his personal notes and collected quotations with his speechwriting staff while he was President. I would say that this and other similar examples of collaborative zettelkasten or collaborative note taking and their uses would blunt u/deafpolygon's argument that shared notes (online or otherwise) are either just (or only) a wiki. The forms are somewhat similar, but not all exactly the same. I suspect others could add to these examples.
And of course if you've been following along with all of my links, you'll have found yourself reading not only these words here, but also reading some of a directed conversation with entry points into my own personal zettelkasten, which you can also query as you like. I hope it has helped to increase the depth and level of the conversation, should you choose to enter into it. It's an open enough one that folks can pick and choose their own path through it as their interests dictate.
-
-
on.substack.com
-
Introducing Substack Notes, by Hamish McKenzie, Chris Best, Jairaj Sethi
-
In Notes, writers will be able to post short-form content and share ideas with each other and their readers. Like our Recommendations feature, Notes is designed to drive discovery across Substack. But while Recommendations lets writers promote publications, Notes will give them the ability to recommend almost anything—including posts, quotes, comments, images, and links.
Substack slowly adding features and functionality to make them a full stack blogging/social platform... first long form, then short note features...
Also pushing in on Twitter's lunch as Twitter is having issues.
-
- Mar 2023
-
web.archive.org
-
The sheer quantity exceeds the possibilities of book publication; the complex, multidimensional structure of a networked information base cannot be reproduced in print; and finally, the dynamics of a constantly growing and constantly-to-be-corrected body of material do not fit the rigid rhythm of book production, in which every expanded and corrected new edition involves immense effort. A book publication could only ever offer a snapshot of such a database, reduced to a particular perspective. That too can be very useful now and then, but it does not solve the problem of publishing the material as a whole.
link to https://hypothes.is/a/U95jEs0eEe20EUesAtKcuA
Is this phenomenon of "complex narratives" related to misinformation spread within larger and more complex social/online networks? At small, local scales, people know how to handle data and information that is locally contextualized for them. On larger, internet-scale social communication platforms this sort of contextualization breaks down.
For lack of a better word, let's temporarily refer to it as "complex narratives" to get a handle on it.
-
-
www.ndss-symposium.org
- Feb 2023
-
-
Related here is the horcrux problem of note taking or even social media. The mental friction of where did I put that thing? As a result, it's best to put it all in one place.
How can you build on a single foundation if you're in multiple locations? The primary (only?) benefit of multiple locations is redundancy in case of loss.
Ryan Holiday and Robert Greene are counterexamples, though Greene's books are generally distinct projects while Holiday's work has a lot of overlap.
-
-
aeon.co
-
If Seneca or Martial were around today, they would probably write sarcastic epigrams about the very public exhibition of reading text messages and in-your-face displays of texting. Digital reading, like the perusing of ancient scrolls, constitutes an important statement about who we are. Like the public readers of Martial’s Rome, the avid readers of text messages and other forms of social media appear to be everywhere. Though in both cases the performers of reading are tirelessly constructing their self-image, the identity they aspire to establish is very different. Young people sitting in a bar checking their phones for texts are not making a statement about their refined literary status. They are signalling that they are connected and – most importantly – that their attention is in constant demand.
-
-
www.washingtonpost.com
-
Internet ‘algospeak’ is changing our language in real time, from ‘nip nops’ to ‘le dollar bean’ by [[Taylor Lorenz]]
shifts in language and meaning of words and symbols as the result of algorithmic content moderation
instead of slow semantic shifts, content moderation is actively pushing shifts of words and their meanings
article suggested by this week's Dan Allosso Book club on Pirate Enlightenment
-
Could it be the shift from person-to-person communication (known in both directions) to massive broadcast that is driving issues with content moderation? When it's person to person, one can simply choose not to interact and put the other person beyond their individual pale. This sort of shunning is much harder to do with larger mass publics at scale in broadcast mode.
How can bringing content moderation back down to the neighborhood scale help in the broadcast model?
-
In January, Kendra Calhoun, a postdoctoral researcher in linguistic anthropology at UCLA, and Alexia Fawcett, a doctoral student in linguistics at UC Santa Barbara, gave a presentation about language on TikTok. They outlined how, by self-censoring words in the captions of TikToks, new algospeak code words emerged.
follow up on this for the relevant forthcoming paper....
-
“It makes me feel like I need a disclaimer because I feel like it makes you seem unprofessional to have these weirdly spelled words in your captions,” she said, “especially for content that's supposed to be serious and medically inclined.”
Where's the balance for professionalism with respect to dodging the algorithmic filters for serious health-related conversations online?
-
But algorithmic content moderation systems are more pervasive on the modern Internet, and often end up silencing marginalized communities and important discussions.
What about non-marginalized toxic communities like Neo-Nazis?
-
Unlike other mainstream social platforms, the primary way content is distributed on TikTok is through an algorithmically curated “For You” page; having followers doesn’t guarantee people will see your content. This shift has led average users to tailor their videos primarily toward the algorithm, rather than a following, which means abiding by content moderation rules is more crucial than ever.
Social media has slowly moved away from communication between people who know each other toward communication between people who are farther apart in social space. Increasingly from 2021 onward, platforms like TikTok have acted as distribution platforms, ignoring explicit social connections like follower/followee in favor of algorithm-only feeds that distribute content based on a variety of criteria, including the popularity of the content and the readers' interests.
Tags
- coded language
- Alexia Fawcett
- cancel culture
- neo-Nazis
- Kendra Calhoun
- social media machine guns
- linguistics
- algorithmic feeds
- dialects
- content moderation
- social media
- shunning
- dialect creation
- marginalized groups
- health care
- social media history
- cultural taboos
- leetspeak
- broadcasting models
- public health
- euphemisms
- demonetization
- human computer interaction
- colloquialisms
- beyond the pale
- algospeak
- cultural anthropology
- historical linguistics
- misinformation
- Voldemorting
- TikTok
-
-
www.thecrimson.com
-
https://www.thecrimson.com/article/2023/2/2/donovan-forced-leave-hks/
This is a massive loss for HKS, but a potential major win for the school that picks the project up.
It seems a sad use of "rules" to shut down a project that may not jibe with an administration's perspective/needs.
Read on Fri 2023-02-03 at 7:14 PM
-
-
www.heise.de
-
One can also take the whole situation as an occasion to think about whether one really needs all of this. Is the benefit of social media so high that it justifies the price? That is a question I have been asking myself since I shut down my personal Twitter account, and at least for me it doesn't feel so wrong to no longer be present on Twitter, Mastodon & Co. Perhaps such a service simply had its time, perhaps we overestimate the relevance of social media, and perhaps it would be good to take more distance from it.
-
-
www.reddit.com
-
One can find utility in asking questions of their own note box, but why not also leverage the utility of a broader audience asking questions of it as well?!
One of the values of social media is that it can allow you to practice or rehearse the potential value of ideas and potentially getting useful feedback on individual ideas which you may be aggregating into larger works.
-
- Jan 2023
-
twitterisgoinggreat.com
-
ncase.itch.io
-
We become what we behold, a game by Nicky Case.
A commentary on news cycles and social media.
-
-
cohost.org
-
social media platform
This technical jargon, in the context of Cohost.org, means "a website".
-
-
-
is zettelkasten gamification of note-taking?
reply to u/theinvertedform at https://www.reddit.com/r/Zettelkasten/comments/zkguan/is_zettelkasten_gamification_of_notetaking/
Social media and "influencers" have certainly grabbed onto the idea and squeezed with both hands. Broadly while talking about their own versions of rules, tips, tricks, and tools, they've missed a massive history of the broader techniques which pervade the humanities for over 500 years. When one looks more deeply at the broader cross section of writers, educators, philosophers, and academics who have used variations on the idea of maintaining notebooks or commonplace books, it becomes a relative no-brainer that it is a useful tool. I touch on some of the history as well as some of the recent commercialization here: https://boffosocko.com/2022/10/22/the-two-definitions-of-zettelkasten/.
-
- Dec 2022
-
arstechnica.com
-
"Queer people built the Fediverse," she said, adding that four of the five authors of the ActivityPub standard identify as queer. As a result, protections against undesired interaction are built into ActivityPub and the various front ends. Systems for blocking entire instances with a culture of trolling can save users the exhausting process of blocking one troll at a time. If a post includes a “summary” field, Mastodon uses that summary as a content warning.
-
-
fedvte.usalearning.gov fedvte.usalearning.gov
-
Investigating social structures through the use of network or graphs Networked structures Usually called nodes ((individual actors, people, or things within the network) Connections between nodes: Edges or Links Focus on relationships between actors in addition to the attributes of actors Extensively used in mapping out social networks (Twitter, Facebook) Examples: Palantir, Analyst Notebook, MISP and Maltego
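The node/edge model described above can be sketched in plain Python (the toy network and user names are invented): in-degree counts how often an account is retweeted, out-degree how often it retweets.

```python
from collections import defaultdict

# Toy retweet network (sketch): an edge (a, b) means user a retweeted user b.
edges = [("ann", "dan"), ("bob", "dan"), ("cat", "dan"),
         ("dan", "eve"), ("cat", "eve")]

in_degree = defaultdict(int)
out_degree = defaultdict(int)
for src, dst in edges:
    out_degree[src] += 1  # retweets this user made
    in_degree[dst] += 1   # times this user was retweeted

most_influential = max(in_degree, key=in_degree.get)
print(most_influential, in_degree[most_influential])  # dan 3
```

Tools like Maltego or Analyst Notebook operate on exactly this relational structure, just at a much larger scale.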
-
-
www.sciencedirect.com www.sciencedirect.com
-
Drawing from negativity bias theory, CFM, ICM, and arousal theory, this study characterizes the emotional responses of social media users and verifies how emotional factors affect the number of reposts of social media content after two natural disasters (predictable and unpredictable disasters). In addition, results from defining the influential users as those with many followers and high activity users and then characterizing how they affect the number of reposts after natural disasters
-
-
psycnet.apa.org psycnet.apa.org
-
Using actual fake-news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake-news headlines occurs despite a low level of overall believability and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem.
-
-
www.nature.com www.nature.com
-
. Furthermore, our results add to the growing body of literature documenting—at least at this historical moment—the link between extreme right-wing ideology and misinformation8,14,24 (although, of course, factors other than ideology are also associated with misinformation sharing, such as polarization25 and inattention17,37).
Misinformation exposure and extreme right-wing ideology appear associated in this report. Others find that it is partisanship that predicts susceptibility.
-
. We also find evidence of “falsehood echo chambers”, where users that are more often exposed to misinformation are more likely to follow a similar set of accounts and share from a similar set of domains. These results are interesting in the context of evidence that political echo chambers are not prevalent, as typically imagined
-
And finally, at the individual level, we found that estimated ideological extremity was more strongly associated with following elites who made more false or inaccurate statements among users estimated to be conservatives compared to users estimated to be liberals. These results on political asymmetries are aligned with prior work on news-based misinformation sharing
This suggests that misinformation-sharing elites may influence whether their followers become more extreme. There is little incentive not to stoke outrage, as it improves engagement.
-
Estimated ideological extremity is associated with higher elite misinformation-exposure scores for estimated conservatives more so than estimated liberals.
Political ideology is estimated using accounts followed10. b Political ideology is estimated using domains shared30 (Red: conservative, blue: liberal). Source data are provided as a Source Data file.
Estimated ideological extremity is associated with higher language toxicity and moral outrage scores for estimated conservatives more so than estimated liberals.
The relationship between estimated political ideology and (a) language toxicity and (b) expressions of moral outrage. Extreme values are winsorized by 95% quantile for visualization purposes. Source data are provided as a Source Data file.
-
In the co-share network, a cluster of websites shared more by conservatives is also shared more by users with higher misinformation exposure scores.
Nodes represent website domains shared by at least 20 users in our dataset and edges are weighted based on common users who shared them. a Separate colors represent different clusters of websites determined using community-detection algorithms29. b The intensity of the color of each node shows the average misinformation-exposure score of users who shared the website domain (darker = higher PolitiFact score). c Nodes’ color represents the average estimated ideology of the users who shared the website domain (red: conservative, blue: liberal). d The intensity of the color of each node shows the average use of language toxicity by users who shared the website domain (darker = higher use of toxic language). e The intensity of the color of each node shows the average expression of moral outrage by users who shared the website domain (darker = higher expression of moral outrage). Nodes are positioned using directed-force layout on the weighted network.
-
Exposure to elite misinformation is associated with the use of toxic language and moral outrage.
Shown is the relationship between users’ misinformation-exposure scores and (a) the toxicity of the language used in their tweets, measured using the Google Jigsaw Perspective API27, and (b) the extent to which their tweets involved expressions of moral outrage, measured using the algorithm from ref. 28. Extreme values are winsorized by 95% quantile for visualization purposes. Small dots in the background show individual observations; large dots show the average value across bins of size 0.1, with size of dots proportional to the number of observations in each bin. Source data are provided as a Source Data file.
-
-
www.nature.com www.nature.com
-
Exposure to elite misinformation is associated with sharing news from lower-quality outlets and with conservative estimated ideology.
Shown is the relationship between users’ misinformation-exposure scores and (a) the quality of the news outlets they shared content from, as rated by professional fact-checkers21, (b) the quality of the news outlets they shared content from, as rated by layperson crowds21, and (c) estimated political ideology, based on the ideology of the accounts they follow10. Small dots in the background show individual observations; large dots show the average value across bins of size 0.1, with size of dots proportional to the number of observations in each bin.
-
-
arxiv.org arxiv.org
-
Notice that Twitter’s account purge significantly impacted misinformation spread worldwide: the proportion of low-credible domains in URLs retweeted from U.S. dropped from 14% to 7%. Finally, despite not having a list of low-credible domains in Russian, Russia is central in exporting potential misinformation in the vax rollout period, especially to Latin American countries. In these countries, the proportion of low-credible URLs coming from Russia increased from 1% in vax development to 18% in vax rollout periods (see Figure 8 (b), Appendix).
-
Interestingly, the fraction of low-credible URLs coming from U.S. dropped from 74% in the vax development period to 55% in the vax rollout. This large decrease can be directly ascribed to Twitter’s moderation policy: 46% of cross-border retweets of U.S. users linking to low-credible websites in the vax development period came from accounts that have been suspended following the U.S. Capitol attack (see Figure 8 (a), Appendix).
-
Considering the behavior of users in no-vax communities, we find that they are more likely to retweet (Figure 3(a)), share URLs (Figure 3(b)), and especially URLs to YouTube (Figure 3(c)) than other users. Furthermore, the URLs they post are much more likely to be from low-credible domains (Figure 3(d)), compared to those posted in the rest of the networks. The difference is remarkable: 26.0% of domains shared in no-vax communities come from lists of known low-credible domains, versus only 2.4% of those cited by other users (p < 0.001). The most common low-credible websites among the no-vax communities are zerohedge.com, lifesitenews.com, dailymail.co.uk (considered right-biased and questionably sourced) and childrenshealthdefense.com (conspiracy/pseudoscience)
-
-
ieeexplore.ieee.org ieeexplore.ieee.org
-
We applied two scenarios to compare how these regular agents behave in the Twitter network, with and without malicious agents, to study how much influence malicious agents have on the general susceptibility of the regular users. To achieve this, we implemented a belief value system to measure how impressionable an agent is when encountering misinformation and how its behavior gets affected. The results indicated similar outcomes in the two scenarios as the affected belief value changed for these regular agents, exhibiting belief in the misinformation. Although the change in belief value occurred slowly, it had a profound effect when the malicious agents were present, as many more regular agents started believing in misinformation.
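A toy version of the belief-value idea described above (the parameters and update rule here are invented for illustration, not taken from the paper): each regular agent's belief score rises slightly with every exposure to misinformation, and adding malicious agents raises exposure.

```python
import random

# Toy belief-value simulation (invented parameters): agents start skeptical
# (belief 0.1); each contact with misinformation nudges belief up slightly.
def run(n_agents=100, steps=50, n_malicious=0, nudge=0.02, seed=1):
    rng = random.Random(seed)
    beliefs = [0.1] * n_agents
    for _ in range(steps):
        for i in range(n_agents):
            # chance of encountering misinformation grows with malicious agents
            if rng.random() < n_malicious / (n_agents + n_malicious + 1):
                beliefs[i] = min(1.0, beliefs[i] + nudge)
    return sum(beliefs) / n_agents

baseline = run(n_malicious=0)
with_bots = run(n_malicious=20)
print(with_bots > baseline)  # True: malicious agents raise average belief
```

Even with a small per-contact nudge, repeated exposure compounds, which matches the paper's observation that the change is slow but profound.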
-
-
www.mdpi.com www.mdpi.com
-
Therefore, although the social bot individual is “small”, it has become a “super spreader” with strategic significance. As an intelligent communication subject in the social platform, it conspired with the discourse framework in the mainstream media to form a hybrid strategy of public opinion manipulation.
-
There were 120,118 epidemic-related tweets in this study, and 34,935 Twitter accounts were detected as bot accounts by Botometer, accounting for 29%. In all, 82,688 Twitter accounts were human, accounting for 69%; 2495 accounts had no bot score detected. In social network analysis, degree centrality is an index to judge the importance of nodes in the network. The nodes in the social network graph represent users, and the edges between nodes represent the connections between users. Based on the network structure graph, we may determine which members of a group are more influential than others. In 1979, American professor Linton C. Freeman published an article titled “Centrality in social networks: conceptual clarification” in Social Networks, formally proposing the concept of degree centrality [69]. Degree centrality denotes the number of times a central node is retweeted by other nodes (or other indicators; only retweets are involved in this study). Specifically, the higher the degree centrality, the more influence a node has in its network. The measures of degree centrality include in-degree and out-degree. Betweenness centrality is an index that describes the importance of a node by the number of shortest paths through it. Nodes with high betweenness centrality are in the “structural hole” position in the network [69]. This kind of account connects group networks lacking communication and can expand the dialogue space of different people. American sociologist Ronald S. Burt put forward the theory of the “structural hole”, arguing that if there is no direct connection between the other actors connected by an actor in the network, then the actor occupies the “structural hole” position and can obtain social capital through “intermediary opportunities”, thus having more advantages.
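The "structural hole" idea can be illustrated with a toy graph (invented): two tight clusters joined only through a broker node. A brute-force betweenness count shows the broker sits on the most shortest paths:

```python
from collections import deque, defaultdict

# Toy graph (sketch): two triangles bridged only through broker node "g".
edges = [("a", "b"), ("b", "c"), ("a", "c"),   # cluster 1
         ("d", "e"), ("e", "f"), ("d", "f"),   # cluster 2
         ("a", "g"), ("d", "g")]               # broker bridges the clusters

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

def shortest_paths(s, t):
    # BFS enumerating all shortest s -> t paths (fine for tiny graphs)
    best, out, q = None, [], deque([[s]])
    while q:
        path = q.popleft()
        if best is not None and len(path) > best:
            continue
        if path[-1] == t:
            best = len(path); out.append(path); continue
        for n in adj[path[-1]]:
            if n not in path:
                q.append(path + [n])
    return out

score = defaultdict(float)
nodes = sorted(adj)
for i, s in enumerate(nodes):
    for t in nodes[i + 1:]:
        paths = shortest_paths(s, t)
        for p in paths:
            for mid in p[1:-1]:  # interior nodes get betweenness credit
                score[mid] += 1 / len(paths)

print(max(score, key=score.get))  # g: the broker sits on the most paths
```

Every path between the clusters must pass through g, which is exactly the "intermediary opportunity" Burt describes.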
-
We analyzed and visualized Twitter data during the prevalence of the Wuhan lab leak theory and discovered that 29% of the accounts participating in the discussion were social bots. We found evidence that social bots play an essential mediating role in communication networks. Although human accounts have a more direct influence on the information diffusion network, social bots have a more indirect influence. Unverified social bot accounts retweet more, and through multiple levels of diffusion, humans are vulnerable to messages manipulated by bots, driving the spread of unverified messages across social media. These findings show that limiting the use of social bots might be an effective method to minimize the spread of conspiracy theories and hate speech online.
-
-
www.robinsloan.com www.robinsloan.com
-
I want to insist on an amateur internet; a garage internet; a public library internet; a kitchen table internet.
Social media should be composed of people from end to end. Corporate interests inserted into the process can only serve to dehumanize the system.
Robin Sloan is in the same camp as Greg McVerry and me.
-
-
atproto.com atproto.com
-
www.getrevue.co www.getrevue.co
-
-
pluralistic.net pluralistic.net
-
Alas, lawmakers are way behind the curve on this, demanding new "online safety" rules that require firms to break E2E and block third-party de-enshittification tools: https://www.openrightsgroup.org/blog/online-safety-made-dangerous/ The online free speech debate is stupid because it has all the wrong focuses: Focusing on improving algorithms, not whether you can even get a feed of things you asked to see; Focusing on whether unsolicited messages are delivered, not whether solicited messages reach their readers; Focusing on algorithmic transparency, not whether you can opt out of the behavioral tracking that produces training data for algorithms; Focusing on whether platforms are policing their users well enough, not whether we can leave a platform without losing our important social, professional and personal ties; Focusing on whether the limits on our speech violate the First Amendment, rather than whether they are unfair: https://doctorow.medium.com/yes-its-censorship-2026c9edc0fd
This list is particularly good.
Proper regulation of end to end services would encourage the creation of filtering and other tools which would tend to benefit users rather than benefit the rent seeking of the corporations which own the pipes.
-
-
www.garbageday.email www.garbageday.email
-
my best guess is it’s the moderation
-
-
rhiaro.co.uk rhiaro.co.uk
-
I'd love it to be normal and everyday to not assume that when you post a message on your social network, every person is reading it in a similar UI, either to the one you posted from, or to the one everyone else is reading it in.
🤗
-
-
a.gup.pe a.gup.pe
-
[https://a.gup.pe/ Guppe Groups] is a group of bot accounts that can be used to aggregate social groups within the [[fediverse]] around a variety of topics like [[crafts]], books, history, philosophy, etc.
Tags
Annotators
URL
-
-
zephoria.medium.com zephoria.medium.com
-
-
A lot has changed about our news media ecosystem since 2007. In the United States, it’s hard to overstate how the media is entangled with contemporary partisan politics and ideology. This means that information tends not to flow across partisan divides in coherent ways that enable debate.
Our media and social media systems, together with the people who use them, have been structured such that debate is stifled: information doesn't flow coherently across the partisan divide.
-
-
blog.jonudell.net blog.jonudell.net
-
Humans didn’t evolve to thrive in frictionless social networks with high fanout and velocity, and arguably we shouldn’t.
-
-
oulipo.social oulipo.social
-
https://oulipo.social/about
Social media without the letter "e".
-
-
beesbuzz.biz beesbuzz.biz
-
blog.erinshepherd.net blog.erinshepherd.net
-
-
The trust one must place in the creator of a blocklist is enormous, because the most dangerous failure mode isn’t that it doesn’t block who it says it does, but that it blocks who it says it doesn’t and they just disappear.
-
-
research.google research.google
-
<small><cite class='h-cite via'>ᔥ <span class='p-author h-card'>Erin Alexis Owen Shepherd</span> in A better moderation system is possible for the social web (<time class='dt-published'>12/03/2022 11:10:32</time>)</cite></small>
-
-
www.noemamag.com www.noemamag.com
-
“The damage commercial social media has done to politics, relationships and the fabric of society needs undoing.
-
As users begin migrating to the noncommercial fediverse, they need to reconsider their expectations for social media — and bring them in line with what we expect from other arenas of social life. We need to learn how to become more like engaged democratic citizens in the life of our networks.
-
-
-
I have about fourteen or sixteen weeks to do this, so I'm breaking the course into an "intro" section that covers some basic stuff like affordances, and other insights into how tech functions. There's a section on AI which is nothing but critical appraisals on AI from a variety of areas. And there's a section on Social Media, which is the most well formed section in terms of readings.
https://zirk.us/@shengokai/109440759945863989
If the individuals in an environment don't understand or perceive the affordances available to them, can the interactions between them and the environment make it seem as if the environment possesses agency?
cross reference: James J. Gibson book The Senses Considered as Perceptual Systems (1966)
People often indicate that social media "causes" outcomes among groups of people who use it. E.g., social media (via algorithmic suggestions of fringe content) causes people to become radicalized.
-
- Nov 2022
-
andy-bell.co.uk andy-bell.co.uk
-
The TTRG (time to reply guy) was getting so fast, that I can’t actually remember the last time I tweeted something helpful like a design or development tip. I just couldn’t be arsed, knowing some dickhead would be around to waste my time with whataboutisms and “will it scale”?
-
-
www.washingtonpost.com www.washingtonpost.com
-
The Post analyzed data from ProPublica’s Represent tool, which tracks congressional Twitter activity.
-
-
community.interledger.org community.interledger.org
-
11/30 Youth Collaborative
I went through some of the pieces in the collection. It is important to give a platform to voices that are usually missing from the conversation.
Just a few similar initiatives that you might want to check out:
Storycorps - people can record their stories via an app
Project Voice - spoken word poetry
Living Library - sharing one's story
Freedom Writers - book and curriculum based on real-life stories
-
-
www.theatlantic.com www.theatlantic.com
-
As part of the Election Integrity Partnership, my team at the Stanford Internet Observatory studies online rumors, and how they spread across the internet in real time.
-
-
www.zylstra.org www.zylstra.org
-
socialmediaissues.net socialmediaissues.net
-
https://socialmediaissues.net/
Website for Social Media Issues, A resource for Comm 182/282. A course offered by Howard Rheingold at Stanford, Autumn, 2013
-
-
wiki.laptop.org wiki.laptop.org
-
cohost.org cohost.org
-
lucahammer.com lucahammer.com
-
the-federation.info the-federation.info
-
An aggregation site with data about the broader Fediverse and projects within it.
Tags
Annotators
URL
-
-
morningconsult.com morningconsult.com
-
The notable exception: social media companies. Gen Zers are more likely to trust social media companies to handle their data properly than older consumers, including millennials, are.
Gen-Z is more trusting of data handling by social media companies
For most categories of businesses, Gen Z adults are less likely to trust a business to protect the privacy of their data as compared to other generations. Social media is the one exception.
-
-
masto.host masto.host
-
https://zettelkasten.social/about
Someone has registered the domain and it is hosted by masto.host, but not yet active as of 2022-11-13
Tags
Annotators
URL
-
-
blog.archive.org blog.archive.org
-
Looking forward to many social media alternatives: Blue Sky, Matrix, and many others.
If wishing only made it happen...
-
-
threadreaderapp.com threadreaderapp.com
-
fedified.com fedified.com
-
meh... This looks dreadful...
Why not just use the built in rel-me verification available in Twitter directly with respect to individual websites?
-
-
tracydurnell.com tracydurnell.com
-
doctorow.medium.com doctorow.medium.com
-
pruvisto.org pruvisto.org
-
https://pruvisto.org/debirdify/
Tool for moving some of your Twitter data over to Mastodon or other parts of the Fediverse.
-
-
theconversation.com theconversation.com
-
-
Any migration is likely to face many of the challenges previous platform migrations have faced: content loss, fragmented communities, broken social networks and shifted community norms.
-
By asking participants about their experiences moving across these platforms – why they left, why they joined and the challenges they faced in doing so – we gained insights into factors that might drive the success and failure of platforms, as well as what negative consequences are likely to occur for a community when it relocates.
-
-
theintercept.com theintercept.com
-
DHS’s mission to fight disinformation, stemming from concerns around Russian influence in the 2016 presidential election, began taking shape during the 2020 election and over efforts to shape discussions around vaccine policy during the coronavirus pandemic. Documents collected by The Intercept from a variety of sources, including current officials and publicly available reports, reveal the evolution of more active measures by DHS. According to a draft copy of DHS’s Quadrennial Homeland Security Review, DHS’s capstone report outlining the department’s strategy and priorities in the coming years, the department plans to target “inaccurate information” on a wide range of topics, including “the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines, racial justice, U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine.”
DHS pivots as "war on terror" winds down
The U.S. Department of Homeland Security pivots from externally-focused terrorism to domestic social media monitoring.
-
- Oct 2022
-
Local file Local file
-
A recent writer has called attention to a passage in Paxson's presidential address before the American Historical Association in 1938, in which he remarked that historians "needed Cheyney's warning . . . not to write in 1917 or 1918 what might be regretted in 1927 and 1928."
There are lessons in Frederic L. Paxson's 1938 address to the American Historical Association for today's social media culture and the growing realm of cancel culture, when he remarked that historians "needed Cheyney's warning... not to write in 1917 or 1918 what might be regretted in 1927 and 1928."
-
-
netzpolitik.org netzpolitik.org
-
Mastodon
https://mastodon.social/
-
-
www.theverge.com www.theverge.com
-
Running Twitter is more complicated than you think.
-
-
glasp.co glasp.co
-
Glasp is a startup competitor in the annotations space that appears to be a subsidiary web-based tool and response to a large portion of the recent spate of note taking applications.
Some of the first users and suggested users are names I recognize from this tools for thought space.
On first blush it looks like it's got a lot of the same features and functionality as Hypothes.is, but it also appears to have some slicker surfaces and user interface as well as a much larger emphasis on the social aspects (followers/following) and gamification (graphs for how many annotations you make, how often you annotate, streaks, etc.).
It could be an interesting experiment to watch the space and see how quickly it both scales as well as potentially reverts to the mean in terms of content and conversation given these differences. Does it become a toxic space via curation of the social features or does it become a toxic intellectual wasteland when it reaches larger scales?
What will happen to one's data (it does appear to be a silo) when the company eventually closes/shuts down/acquihired/other?
The team behind it is obviously aware of Hypothes.is as one of the first annotations presented to me is an annotation by Kei, a cofounder and PM at the company, on the Hypothes.is blog at: https://web.hypothes.is/blog/a-letter-to-marc-andreessen-and-rap-genius/
But this is true for Glasp. Science researchers/writers use it a lot on our service, too.—Kei
cc: @dwhly @jeremydean @remikalir
-
-
interaksyon.philstar.com interaksyon.philstar.com
-
Edgerly noted that disinformation spreads through two ways: The use of technology and human nature.Click-based advertising, news aggregation, the process of viral spreading and the ease of creating and altering websites are factors considered under technology.“Facebook and Google prioritize giving people what they ‘want’ to see; advertising revenue (are) based on clicks, not quality,” Edgerly said.She noted that people have the tendency to share news and website links without even reading its content, only its headline. According to her, this perpetuates a phenomenon of viral spreading or easy sharing.There is also the case of human nature involved, where people are “most likely to believe” information that supports their identities and viewpoints, Edgerly cited.“Vivid, emotional information grabs attention (and) leads to more responses (such as) likes, comments, shares. Negative information grabs more attention than (the) positive and is better remembered,” she said.Edgerly added that people tend to believe in information that they see on a regular basis and those shared by their immediate families and friends.
Spreading misinformation and disinformation is really easy in this day and age because of how accessible information is and how much of it there is on the web. This is explained precisely by Edgerly. As noted in this part of the article, there is a business behind the spread of disinformation, particularly in our country. There are people who pay what we call online trolls to spread disinformation and capitalize on how "chronically online" Filipinos are, among many other factors (i.e., most Filipinos' information illiteracy due to poverty and lack of educational attainment, how easy it is to interact with content we see online regardless of its authenticity, etc.). Disinformation also leads to misinformation through word of mouth. As stated by Edgerly in this article, "people tend to believe in information… shared by their immediate families and friends"; because of people's natural tendency to trust information shared by their loved ones, if one is not information literate, they will not question newly received information. Lastly, it most certainly does not help that social media algorithms nowadays rely on what users interact with; the more a user interacts with certain information, the more social media platforms will feed them that information. It does not help because not all social media websites have fact checkers, and users can freely spread disinformation if they choose to.
-
-
www.cits.ucsb.edu www.cits.ucsb.edu
-
Trolls, in this context, are humans who hold accounts on social media platforms, more or less for one purpose: To generate comments that argue with people, insult and name-call other users and public figures, try to undermine the credibility of ideas they don’t like, and to intimidate individuals who post those ideas. And they support and advocate for fake news stories that they’re ideologically aligned with. They’re often pretty nasty in their comments. And that gets other, normal users, to be nasty, too.
Not only are programmed accounts created, but also troll accounts that propagate disinformation and spread fake news with the intent to cause havoc among people. In short, once they start with a malicious comment, some people will engage with it, which leads to more rage comments and disagreements. That is what they do: they trigger people to engage with their comments so that the comments spread further and produce more fake news. These troll accounts are usually prominent during elections; in the Philippines, some speculate that certain candidates have run troll farms just to spread fake news all over social media, which some people engage with.
-
So, bots are computer algorithms (set of logic steps to complete a specific task) that work in online social network sites to execute tasks autonomously and repetitively. They simulate the behavior of human beings in a social network, interacting with other users, and sharing information and messages [1]–[3]. Because of the algorithms behind bots’ logic, bots can learn from reaction patterns how to respond to certain situations. That is, they possess artificial intelligence (AI).
In all honesty, since I don't usually dwell on technology and coding, I thought that a "bot" was controlled by another user like a real person; I never knew it was programmed and created to learn the usual posting patterns of people, be it on Twitter, Facebook, or other social media platforms. I think it is important to properly understand how bots work to avoid misinformation and disinformation, most importantly during this time of prominent social media use.
-
- Sep 2022
-
www.scientificamerican.com www.scientificamerican.com
-
Good overview article of some of the psychology research behind misinformation in social media spaces including bots, AI, and the effects of cognitive bias.
Probably worth mining the story for the journal articles and collecting/reading them.
-
Bots can also accelerate the formation of echo chambers by suggesting other inauthentic accounts to be followed, a technique known as creating “follow trains.”
-
We observed an overall increase in the amount of negative information as it passed along the chain—known as the social amplification of risk.
Could this be linked to my FUD thesis about decisions based on possibilities rather than realities?
-
We confuse popularity with quality and end up copying the behavior we observe.
Popularity ≠ quality in social media.
-
“Limited individual attention and online virality of low-quality information,” By Xiaoyan Qiu et al., in Nature Human Behaviour, Vol. 1, June 2017
The upshot of this paper seems to be "information overload alone can explain why fake news can become viral."
-
Running this simulation over many time steps, Lilian Weng of OSoMe found that as agents' attention became increasingly limited, the propagation of memes came to reflect the power-law distribution of actual social media: the probability that a meme would be shared a given number of times was roughly an inverse power of that number. For example, the likelihood of a meme being shared three times was approximately nine times less than that of its being shared once.
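The quoted example pins down the exponent: if three shares is about nine times less likely than one share, the distribution is an inverse square law, P(k) ∝ k^-2, since 3² = 9. A one-line check:

```python
# Inverse power law implied by the quote: P(k shares) proportional to k**-alpha.
# "Shared three times is ~9x less likely than shared once" gives alpha = 2.
alpha = 2

def relative_likelihood(k, alpha=alpha):
    return 1 / k ** alpha  # probability of k shares, up to normalization

ratio = relative_likelihood(1) / relative_likelihood(3)
print(round(ratio, 6))  # 9.0
```

The same heavy tail means a handful of memes dominate attention while most are never reshared at all.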
-
One of the first consequences of the so-called attention economy is the loss of high-quality information.
In the attention economy, social media is the equivalent of fast food. Just like going out for fine dining or even healthier gourmet cooking at home, we need to make the time and effort to consume higher quality information sources. Books, journal articles, and longer forms of content with more editorial and review which take time and effort to produce are better choices.
Tags
- quality
- follow trains
- food
- memes
- fear uncertainty and doubt
- cognitive bias
- echo chambers
- virality
- power-law probability distributions
- bots
- social media
- reading practices
- analogies
- psychology
- social amplification of risk
- FUD
- read
- decisions based on possibilities rather than realities
- popularity
- artificial intelligence
- risk
- neologisms
- information overload
- misinformation
- attention
- attention economy
Annotators
URL
-
-
mleddy.blogspot.com mleddy.blogspot.com
-
https://mleddy.blogspot.com/2005/05/tools-for-serious-readers.html
Interesting (now discontinued) reading list product from Levenger that in previous generations may have been covered by a commonplace book but was quickly replaced by digital social products (bookmark applications or things like Goodreads.com or LibraryThing.com).
Presently I keep a lot of this sort of data digitally myself using either/both: Calibre or Zotero.
-
- Aug 2022
-
-
Indie sites can’t compete with that. And what good is hosting and controlling your own content if no one else looks at it? I’m driven by self-satisfaction and a lifelong archivist mindset, but others may not be similarly inclined. The payoffs here aren’t obvious in the short-term, and that’s part of the problem. It will only be when Big Social makes some extremely unpopular decision or some other mass exodus occurs that people lament about having nowhere else to go, no other place to exist. IndieWeb is an interesting movement, but it’s hard to find mentions of it outside of hippie tech circles. I think even just the way their “Getting Started” page is presented is an enormous barrier. A layperson’s eyes will 100% glaze over before they need to scroll. There is a lot of weird jargon and in-joking. I don’t know how to fix that either. Even as someone with a reasonably technical background, there are a lot of components of IndieWeb that intimidate me. No matter the barriers we tear down, it will always be easier to just install some app made by a centralised platform.
-
-
-
-
We’re trapped in a Never-Ending Now — blind to history, engulfed in the present moment, overwhelmed by the slightest breeze of chaos. Here’s the bottom line: You should prioritize the accumulated wisdom of humanity over what’s trending on Twitter.
Recency bias and social media will turn your daily inputs into useless, possibly rage-inducing, information.
-
-
securingdemocracy.gmfus.org securingdemocracy.gmfus.org
-
Schafer, B. (2021, October 5). RT Deutsch Finds a Home with Anti-Vaccination Skeptics in Germany. Alliance For Securing Democracy. https://securingdemocracy.gmfus.org/rt-deutsch-youtube-antivaccination-germany/
-
-
www.thegamer.com www.thegamer.com
-
Bevan, R. (2022, February 27). Discord Bans Covid-19 And Vaccine Misinform
-