91 Matching Annotations
  1. Sep 2023
    1. Zeynep Tufekci recently wrote in the Times: “YouTube may be one of the most powerful radicalizing instruments of the 21st century.”
    2. “Even the creators don’t always understand why it recommends one video instead of another,” says Guillaume Chaslot, an ex-YouTube engineer who worked on the site’s algorithm.
    3. According to YouTube chief product officer Neal Mohan, 70 percent of views on YouTube are from recommendations—so the site’s algorithms are largely responsible for amplifying RT’s propaganda hundreds of millions of times.
    4. These algorithms are invisible, but they have an outsized impact on shaping individuals’ experience online and society at large.
  2. Mar 2023
    1. he gained popularity, particularly among young men, by promoting what he presented as a hyper-masculine, ultra-luxurious lifestyle.

      Andrew Tate, a former kickboxer and Big Brother (17, UK) housemate, has gained popularity among young men for promoting a "hyper-masculine, ultra-luxurious lifestyle".

      Where does Tate fit into the pantheon of the prosperity gospel? Is he touching on it or extending it to the nth degree? How much of his audience overlaps with the religious right that would internalize such a viewpoint?

  3. Feb 2023
    1. TikTok offers an online resource center for creators seeking to learn more about its recommendation systems, and has opened multiple transparency and accountability centers where guests can learn how the app’s algorithm operates.

      There seem to be a number of issues with the positive and negative feedback systems these social media companies are trying to create. What are they really measuring? They either aren't measuring well or aren't designing well (or both?)...

    2. Is algorithmic content moderation creating a new sort of cancel culture online?

    3. Unlike other mainstream social platforms, the primary way content is distributed on TikTok is through an algorithmically curated “For You” page; having followers doesn’t guarantee people will see your content. This shift has led average users to tailor their videos primarily toward the algorithm, rather than a following, which means abiding by content moderation rules is more crucial than ever.

      Social media has slowly moved away from communication between people who know each other toward communication between people who are farther apart in social space. Increasingly from 2021 onward, platforms like TikTok have acted as distribution platforms, ignoring explicit social connections like follower/followee in favor of algorithm-only feeds that distribute content to people based on a variety of criteria, including the popularity of content and readers' interests.

  4. Jan 2023
    1. Every day, thousands of strangers upload little slices of their consciousness directly into my mind. My concern is that I'm prone to mistake their thoughts for my own — that some part of me believes I'm only hearing myself think.

      Letting others think for us is groupthink, though we should recognize that as social animals it is important for us to know how others think. The problem with any type of feed, even RSS, is that it tells us what to be thinking about; thoughtful curation of sources is vital.

    1. Just added RSS links to many of my categories. They’ve always existed, I just figured it would be nice to include them in the category description.

      This is a very good idea. All WordPress sites output feeds for categories and tags by default. I will borrow this for categories and tags for which I regularly produce enough articles for them to merit individual subscriptions for interested readers.
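
      As a concrete illustration, these are the default feed URL patterns WordPress exposes for categories and tags (a sketch assuming pretty permalinks are enabled; example.com and the slugs are placeholders):

      ```
      https://example.com/category/indieweb/feed/   # feed for a single category
      https://example.com/tag/rss/feed/             # feed for a single tag
      https://example.com/comments/feed/            # feed of all comments
      ```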

  5. Dec 2022
    1. Alas, lawmakers are way behind the curve on this, demanding new "online safety" rules that require firms to break E2E and block third-party de-enshittification tools: https://www.openrightsgroup.org/blog/online-safety-made-dangerous/

      The online free speech debate is stupid because it has all the wrong focuses:

      - Focusing on improving algorithms, not whether you can even get a feed of things you asked to see;
      - Focusing on whether unsolicited messages are delivered, not whether solicited messages reach their readers;
      - Focusing on algorithmic transparency, not whether you can opt out of the behavioral tracking that produces training data for algorithms;
      - Focusing on whether platforms are policing their users well enough, not whether we can leave a platform without losing our important social, professional and personal ties;
      - Focusing on whether the limits on our speech violate the First Amendment, rather than whether they are unfair: https://doctorow.medium.com/yes-its-censorship-2026c9edc0fd

      This list is particularly good.


      Proper regulation of end-to-end services would encourage the creation of filtering and other tools which would tend to benefit users rather than the rent-seeking of the corporations which own the pipes.

    2. But there's another side to this playlistification of feeds: playlists and other recommendation algorithms are chokepoints: they are a way to durably interpose a company between a creator and their audience. Where you have chokepoints, you get chokepoint capitalism: https://chokepointcapitalism.com/

      Massive social media networks use algorithmic feeds and other programmatic and centralizing methods to interpose themselves between people trying to reach each other, often in ways which allow them to extract additional value from the participants. They become necessary platforms creating chokepoints for flows of information, a pattern Cory Doctorow and Rebecca Giblin call "chokepoint capitalism".

  6. Oct 2022
    1. Honestly, the lack of support for RSS is one of the reasons I'm disinclined to embrace the standards they promote. Maybe if I understood why this is the case I'd feel better about seeing myself as part of this group.

      I agree that IndieWeb-focused sites should provide and promote RSS/Atom feeds. However, I do not see a tension in embracing some IndieWeb projects (e.g., microformats, Webmentions) without being a part of the IndieWeb community.

    1. In the meeting, however, I asked how the attendees expected people to keep up with site updates without some type of feed to monitor. Aaron’s response was that more people needed to adopt microformats. I said that this was a “boil the ocean” strategy and that people who use feeds to monitor sites expect to use RSS and Atom, not microformats.

      I agree 100% with the author here. As I opined in my own article about gaming on Linux, you have to meet people where they are. To the extent people curate their own reading lists, they use RSS/Atom readers, not microformats readers. Trying to force the adoption of microformats readers will only lead to people who rely on RSS/Atom ignoring microformats-only sites.

    2. What I found in looking at other Indieweb-type sites was that they did not have any RSS feed for posts. Specifically, the two co-founders, Aaron Parecki and Tantek Celik, did not have feeds available for their sites. In the next meeting I attended, I brought this up. The response was that they were using microformats to encode data within their websites, and that there were microformat parsers which could read that formatted data and present it in a feed reader application.

      Microformats are neat, and I am interested in their potential to add social functionality to individual websites. However, microformat parsers are far less used than RSS/Atom feed readers, and there is too little awareness of RSS/Atom readers as it is.

    3. RSS and podcasting are a crucial part of what I call (and others have called) the “independent web” (websites and web presences that are not part of a silo like Twitter, Facebook, etc, where people own their data and control it (also an IndieWeb principle)). The two areas (IndieWeb and independent web) share some features, but in my opinion, should not be considered “the same” – there are differences.

      I agree fully. I think that aspects of the IndieWeb, potentially Webmentions, have great potential and should be used by more sites. But RSS/Atom feeds are an essential way of consuming content, regardless of whether the reader maintains his or her own site.

  7. Sep 2022
    1. Our plan is to bring the best of old-school blogging to a modern news feed experience and to have our editors and senior reporters constantly updating the site with the best of tech and science news from around the entire internet. If that means linking out to Wired or Bloomberg or some other news source, that’s great — we’re happy to send people to excellent work elsewhere, and we trust that our feed will be useful enough to have you come back later. If that means we just need to embed the viral TikTok or wacky CEO tweet and move on, so be it — we can do that

      Chainverse can play a role in personalization/recommendations for feed-first platforms. Perhaps we can create our own feed.

  8. Jul 2022
    1. https://herman.bearblog.dev/a-better-ranking-algorithm/

    2. I removed the in-feed upvote button, making posts only up-votable at the bottom of the post itself. This increases the vote quality (if not the quantity).

      Putting the upvote button at the bottom of a post makes it a better indicator of quality than at the top, where the post is less likely to have been read and votes are more of a knee-jerk reaction, particularly for the punch-the-monkey crowd.

      Similar to how I use read, listen, and watch posts.

    3. use the number of views as a factor in determining the score of a post, essentially using the “upvote rate” instead of the pure upvote number:

      Score = (Upvotes / Views) / (Time in hours + 4) ^ Gravity

      Or, removing time from consideration entirely, decaying posts the more they are read:

      Score = Upvotes / Views ^ Gravity

      These, however, have a bias against longer posts (as is the case with all these algorithms in general, but exacerbated here), since longer posts (by virtue of being long) may take time to read or be saved to read later, and may get a lot of views initially which actually degrade the score despite people finding the content valuable. It’s an interesting idea though and could be used for platforms where all posts are of a similar digestibility.

      On platforms where the content takes roughly the same amount of time to consume, one can factor in the number of views versus upvotes as a quality indicator. One needs to be more careful with longer-form content, though, as length will tend to decrease readership and clicks and potentially push people to "bookmark" to read later. How to account for these?

      What is the list of variables in this overall problem?
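
      A minimal sketch of these two view-weighted scores in Python (the function names and the gravity constant are assumptions for illustration, not taken from the original post):

      ```python
      def upvote_rate_score(upvotes: int, views: int, hours: float,
                            gravity: float = 1.8) -> float:
          """Upvote rate damped by age: (upvotes / views) / (hours + 4) ** gravity."""
          rate = upvotes / max(views, 1)  # guard against zero-view posts
          return rate / (hours + 4) ** gravity

      def view_decay_score(upvotes: int, views: int, gravity: float = 1.8) -> float:
          """Timeless variant: a post's score decays the more it is read."""
          return upvotes / max(views, 1) ** gravity

      # A post with 50 upvotes from 500 views, ten hours after publication:
      print(upvote_rate_score(50, 500, 10))  # ~0.00087
      print(view_decay_score(50, 500))       # ~0.00069
      ```

      The view-decay variant makes the long-post bias easy to see: doubling views while holding upvotes constant always lowers the score, even if the extra views are people saving the piece to read later.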

    4. I dislike the separation of Trending and Newest. This is one of the main reasons for false negatives as new articles don’t receive many (if any) views. I’m thinking about randomly interspersing new articles in the trending feed to give them the potential of getting their first few votes. This (as ever) has an effect on quality, so has to be done with care.

      Introducing some randomness for new unranked articles is an interesting and likely useful tactic.
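
      A sketch of what that interspersal might look like (a hypothetical approach for illustration, not Herman's actual implementation): each slot in the trending feed gets a small chance of being filled by a random new, unranked post instead.

      ```python
      import random

      def blended_feed(trending: list, fresh: list, epsilon: float = 0.1) -> list:
          """Intersperse unranked new posts into a trending feed.

          Before each trending slot there is an epsilon chance of slipping in
          a randomly chosen fresh post, giving new articles a shot at their
          first few votes; unplaced fresh posts simply wait for the next pass.
          """
          remaining = list(fresh)
          feed = []
          for post in trending:
              if remaining and random.random() < epsilon:
                  feed.append(remaining.pop(random.randrange(len(remaining))))
              feed.append(post)
          return feed

      print(blended_feed(["t1", "t2", "t3", "t4"], ["new1", "new2"]))
      ```

      Keeping epsilon small preserves the overall quality of the feed while still giving every new article some exposure, which addresses the cold-start false negatives described above.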

    5. The Hacker News algorithm. This algorithm is fairly straightforward (although there’s some magic going on under the surface when it comes to moderation, shadow banning, and post pinning). This is slightly simplified, but in a nutshell:

      Score = Upvotes / (Time since submission) ^ Gravity

      where Gravity = 1.7. As upvotes accumulate the score rises but is counterbalanced by the time since submission. Gravity makes it exponentially more difficult to rank as time goes by.

      A short outline of the Hacker News ranking algorithm.
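
      The quoted formula translates directly into code; a minimal sketch (the small age offset in the denominator is an addition here, matching the widely cited version of the formula, to keep brand-new posts from dividing by zero):

      ```python
      def hn_score(upvotes: int, hours: float, gravity: float = 1.7) -> float:
          """Simplified Hacker News ranking: upvotes divided by age raised to gravity."""
          return upvotes / (hours + 2) ** gravity

      # The same 100-upvote post sinks steadily as it ages:
      for hours in (1, 6, 24):
          print(hours, round(hn_score(100, hours), 2))  # 15.45, 2.92, 0.39
      ```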

    6. Once a post goes viral on Twitter, Hacker News, Reddit, or anywhere else off-platform, it has the potential to form a “Katamari ball” where it gets upvotes because it has upvotes (which means it gets more upvotes, because it has more upvotes, which means…well…you get it). This is also known as "the network effect", but I feel a Katamari ball better illustrates it.

      Network effects can describe a broad variety of phenomena. Is a Katamari ball a better descriptor of this specific phenomenon?

      How does one prioritize the richer, higher-quality "Lindy" library material that may be even more beneficial than things which are simply new?

    7. The most common way is to log the number of upvotes (or likes/downvotes/angry-faces/retweets/poop-emojis/etc) and algorithmically determine the quality of a post by consensus.

      When thinking about algorithmic feeds, one probably ought not to include simple likes/favorites/bookmarks, as they're such low-hanging fruit. Better indicators are interactions which take time, effort, and work to post.

      Using various forms of webmention as indicators could be interesting as one can parse responses and make an actual comment worth more than a dozen "likes", for example.

      Curating people (who respond) as well as curating the responses themselves could be useful.

      Time-windowed curation of people and curators could be a useful metric.

      Attempting to be "democratic" in these processes may often lead to the Harry and Mary Beercan effect and the gaming issues seen in spaces like Digg or Twitter, with dramatic consequences for the broader readership and community. Democracy in these spaces is more likely to get you cat videos and vitriol with a soupçon of listicles and clickbait.
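
      A rough sketch of such effort-weighted scoring in Python (the interaction types, weights, and the doubling for curated authors are all illustrative assumptions, not a prescribed design):

      ```python
      # Weight interactions by the effort they take to produce, so that a
      # substantive reply counts far more than a one-click like.
      INTERACTION_WEIGHTS = {
          "like": 1,        # low-hanging fruit; nearly free to produce
          "repost": 2,
          "bookmark": 2,
          "reply": 15,      # an actual comment worth more than a dozen likes
      }

      def effort_score(interactions: list[tuple[str, str]],
                       trusted_authors: set[str]) -> float:
          """Score a post from (kind, author) interaction pairs.

          Responses from curated/trusted authors count double, reflecting
          the idea of curating the people who respond as well as the
          responses themselves.
          """
          score = 0.0
          for kind, author in interactions:
              weight = INTERACTION_WEIGHTS.get(kind, 0)
              if author in trusted_authors:
                  weight *= 2
              score += weight
          return score

      print(effort_score([("like", "alice"), ("reply", "bob")],
                         trusted_authors={"bob"}))  # 1 + 15*2 = 31
      ```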

  9. Jun 2022
    1. You can also get a feed directly with a username: https://www.youtube.com/feeds/videos.xml?user=<username>

      The one I use most is the one for playlists (if creators remember to use them): https://www.youtube.com/feeds/videos.xml?playlist_id=<playlist id>

      ```bash
      $ curl -s "https://www.youtube.com/c/TED" |
          grep -Po '"channelId":".+?"' |
          cut -d \" -f 4 |
          while read -r YTID ; do
            echo -n "Youtube-ID: $YTID " ;
            curl -s "https://www.youtube.com/feeds/videos.xml?channel_id=$YTID" |
              grep -m 1 -P -o "(?<=<title>).+(?=</title>)" ;
          done
      Youtube-ID: UCsT0YIqwnpJCM-mx7-gSA4Q TEDx Talks
      Youtube-ID: UCsooa4yRKGN_zEE8iknghZA TED-Ed
      Youtube-ID: UCy9b8cNJQmxX8Y2bdE6mQNw TED Audio Collective
      Youtube-ID: UCBjBZmguQzn6WCYR7DQykLw TED Fellow
      Youtube-ID: UC6riU7xaRItrH_8_P6rYsfQ TED Institute
      Youtube-ID: UCV8cNYL6xPS-MxkQ2BTsX6Q TEDPrizeChannel
      Youtube-ID: UCDAdYdnCDt9zx3p3e_78lEQ TED Ideas Studio
      Youtube-ID: UC-yTB2bUcin9mmah36sXiYA TEDxYouth
      Youtube-ID: UCAuUUnT6oDeKwE6v1NGQxug TED
      ```

  10. May 2022
    1. This came in the context of weighing what she stood to gain and lose in leaving a staff job at BuzzFeed. She knew the worth of what editors, fact-checkers, designers, and other colleagues brought to a piece of writing. At the same time, she was tired of working around the “imperatives of social media sharing.” Clarity and concision are not metrics imposed by the Facebook algorithm, of course — but perhaps such concerns lose some of their urgency when readers have already pledged their support.

      Continuing with the idea above about the shift of Sunday morning talk shows and the influence of Hard Copy: is social media exerting a negative influence on mainstream content and conversation as a result of its algorithmic gut-reaction pressure? How can we fight this effect?

    1. “It was 2017, I would say, when Twitter started really cracking down on bots in a way that they hadn’t before — taking down a lot of bad bots, but also taking down a lot of good bots too. There was an appeals process [but] it was very laborious, and it just became very difficult to maintain stuff. And then they also changed all their API’s, which are the programmatic interface for how a bot talks to Twitter. So they changed those without really any warning, and everything broke.

      Just like chilling actions by political actors, social media corporations can use changes in policy and APIs to stifle and chill speech online.

      This doesn't mean that there aren't bad actors building bots to actively cause harm, but there is a class of potentially helpful and useful bots (tools) that can make a social space better or more interesting.

      How does one regulate this sort of speech? Perhaps the answer is simply not to algorithmically amplify these bots and their speech over that of humans.

      More and more I think that the answer is to make online social interactions more like in-person interactions. Too much social media gives an even bigger bullhorn to the crazy preacher on the corner of Main Street who shouted at crowds that simply ignored him. Social media has made it easier for us to shout such preachers back down, and in doing so, we only make them heard by more people. We need a negative feedback mechanism to dampen these effects the same way it would have happened offline.

    2. He and his fellow bot creators had been asking themselves over the years, “what do we do when the platform [Twitter] becomes unfriendly for bots?”

      There's some odd irony in this quote. Kazemi indicates that Twitter was unfriendly for bots, but he should be specific: it was unfriendly for non-corporately owned bots. One could argue that much of the interaction on Twitter is spurred by the primary bot on the service: the algorithmic feed that spurs people to like, retweet, and interact with more content, thus keeping them on the platform longer.

  11. Apr 2022
    1. Since most of our feeds rely on either machine algorithms or human curation, there is very little control over what we actually want to see.

      While algorithmic feeds and "artificial intelligences" might control large swaths of what we see in our passive acquisition modes, we can and certainly should spend more of our time in active search modes which don't employ these tools or methods.

      How might we better blend our passive and active modes of search and discovery while still having and maintaining the value of serendipity in our workflows?

      Consider the loss of library stacks in our research workflows. We've lost some of the serendipity of seeing the book titles on the shelf adjacent to the one we're looking for. What about the books just above and below it? How do we replicate that sort of serendipity in our digital world?

      How do we help prevent shiny object syndrome? How can we stay on task rather than move on to the next pretty thing or topic presented to us by an algorithmic feed, so that we can accomplish the task we set out to do? Certainly bookmarking a thing or topic for later follow-up can be useful so we don't go too far afield, but what other methods might we use? How can we optimize our random walks through life and a sea of information to tie disparate parts of everything together? Do we need to rely on doing it only as a broader species? Can smaller subgroups accomplish this if carefully planned, or is exploring the problem space only possible at mass scale? And even then we may be undershooting the goal by an order of magnitude (or ten).

    2. We have to endlessly scroll and parse a ton of images and headlines before we can find something interesting to read.

      The randomness of interesting tidbits in a social media scroll helps put us in a state of flow. We get small hits of dopamine from finding interesting posts that fill in the gaps between the boring bits, and suddenly find we've lost the day. As a result, an endless scroll of varying quality can make one feel productive when in fact a reasonably large proportion of one's time is spent on useless and uninteresting content.

      This effect may be pushed even further when the scroll is curated algorithmically and the dopamine hits become more frequent. Potentially worse, the depth of insight found in most social feeds is very shallow and rarely ever deep. One is almost never invited to delve further to find new insights.


      How might a social media stream of content be leveraged to help people read more interesting and complex content? Could putting Jacques Derrida's texts into a social media-like framing create this? Then one could reply to the text by sentence or paragraph with their own notes. This is similar to the user interface of Hypothes.is, but Hypothes.is has a more traditional reading interface compared to the social media space. What if one interspersed multiple authors in short threads? What other methods might work to "trick" the human mind into having more fun and finding flow in their deeper and more engaged reading states?

      Link this to the idea of fun in Sönke Ahrens' How to Take Smart Notes.

    3. …and they are typically sorted:

      - chronologically: newest items are displayed first
      - through data: most popular, trending, votes
      - algorithmically: the system determines what you see through your consumption patterns and what it wants you to see
      - by curation: humans determine what you see
      - by taxonomy: content is displayed within buckets of categories, like Wikipedia

      Most media entities employ a combination of the above.

      For reading richer, denser texts, what is the best way of ordering and sorting them?

      Algorithmic sorting with a pseudo-chronological bias is the best method for social media content, but what is the most efficient method for journal articles? For books?

  12. Mar 2022
    1. First is that it actually lowers paid acquisition costs. It lowers them because the Facebook Ads algorithm rewards engaging advertisements with lower CPMs and lots of distribution. Facebook does this because engaging advertisements are just like engaging posts: they keep people on Facebook. 

      Engaging advertisements on Facebook benefit from lower acquisition costs because the Facebook algorithm rewards more interesting advertisements with lower CPMs and wider distribution. This is done, as with all things driven by surveillance capitalism, to keep eyeballs on Facebook.

      This isn't too dissimilar to large cable networks that provide free high-quality advertising to mass manufacturers in late-night slots. The network generally can't sell all of its advertising inventory, particularly in low-viewing hours, so it will offer free or incredibly cheap commercial rates to its bigger buyers (like Coca-Cola or McDonald's, for example) to fill space and have more professional-looking advertisements between the low-quality advertisements from local mom-and-pop stores and the "as seen on TV" spots. These higher-quality commercials help keep the audience engaged and prevent viewers from changing the channel.

    1. Posting a new algorithm, poem, or video on the web makes it available, but unless appropriate recipients notice it, the originator has little chance to influence them.

      An early statement of the problem of distribution which has been widely solved by many social media algorithmic feeds. Sadly pushing ideas to people interested in them (or not) doesn't seem to have improved humanity. Perhaps too much of the problem space with respect to the idea of "influence" has been devoted to marketing and commerce or to fringe misinformation spaces? How might we create more value to the "middle" of the populace while minimizing misinformation and polarization?

    2. The current mass media such as television, books, and magazines are one-directional, and are produced by a centralized process. This can be positive, since respected editors can filter material to ensure consistency and high quality, but more widely accessible narrowcasting to specific audiences could enable livelier decentralized discussions. Democratic processes for presenting opposing views, caucusing within factions, and finding satisfactory compromises are productive for legislative, commercial, and scholarly pursuits.

      Social media has to some extent democratized access to media; however, there are not nearly enough processes for creating negative feedback to dampen ideas which shouldn't or wouldn't have gained footholds in a mass society.

      We need more friction in some portions of the social media space to prevent the dissemination of un-useful, negative, and destructive ideas from swamping out the positive ones. The accelerative force of algorithmic feeds on the most extreme ideas in particular has been one of the most caustic developments of the last quarter century.

  13. Feb 2022
    1. https://dancohen.org/2019/07/23/engagement-is-the-enemy-of-serendipity/

      Dan Cohen talks about a design change in the New York Times app that actively discourages exploration and discovery by serendipity.

      This is similar to pulling up digital copies of the books you're looking for instead of going to the library: tracking down the book on the shelf, in the process seeing and experiencing the books nearby on the shelf, or even the book that catches your eye across the aisle, one that wasn't in your sphere of search or interest, but that you pick up anyway.

      How can we bring this sort of design back to digital experiences?

      It's not just the algorithmic feeds which are narrowing our interests and exposure, but the design of our digital spaces as well.

  14. Jan 2022
  15. Oct 2021
    1. https://www.theatlantic.com/ideas/archive/2021/10/facebook-papers-democracy-election-zuckerberg/620478/

      Adrienne LaFrance outlines the reasons we need to either abandon Facebook or impose far more extreme regulation on it and how it operates.

      While she outlines the ills, she doesn't make a specific plea about the solution of the problem. There's definitely a raging fire in the theater, but no one seems to know what to do about it. We're just sitting here watching the structure burn down around us. We need clearer plans for what must be done to solve this problem.

    2. An internal message characterizing Zuckerberg’s reasoning says he wanted to avoid new features that would get in the way of “meaningful social interactions.” But according to Facebook’s definition, its employees say, engagement is considered “meaningful” even when it entails bullying, hate speech, and reshares of harmful content.

      Meaningful social interactions don't need algorithmic help.

    3. At the time, Facebook was already weighting the reactions other than “like” more heavily in its algorithm—meaning posts that got an “angry” reaction were more likely to show up in users’ News Feeds than posts that simply got a “like.” Anger-inducing content didn’t spread just because people were more likely to share things that made them angry; the algorithm gave anger-inducing content an edge. Facebook’s Integrity workers—employees tasked with tackling problems such as misinformation and espionage on the platform—concluded that they had good reason to believe targeting posts that induced anger would help stop the spread of harmful content.
    4. Facebook has dismissed the concerns of its employees in manifold ways. One of its cleverer tactics is to argue that staffers who have raised the alarm about the damage done by their employer are simply enjoying Facebook’s “very open culture,” in which people are encouraged to share their opinions, a spokesperson told me.
      1. Share opinions
      2. Opinions viewed as "fact"
      3. "Facts" spread as news.
      4. Platform accelerates "news".
      5. Bad things happen
      6. Profit
  16. Sep 2021
    1. Kevin Marks talks about the bridging of new people into one's in-group by Twitter's retweet functionality from a positive perspective.

      He doesn't foresee the deleterious effects of engagement algorithms doing just the opposite: increasing the volume of noise as one's in-group hates on and interacts with "bad" content from the other direction. Some of these effects may also be harmful from a slow-brainwashing perspective if not guarded against.

  17. Aug 2021
    1. Fukuyama's answer is no. Middleware providers will not see privately shared content from a user's friends. This is a good answer if our priority is privacy. It lets my cousin decide which companies to trust with her sensitive personal information. But it hobbles middleware as a tool for responding to her claims about vaccines. And it makes middleware providers far less competitive, since they will not be able to see much of the content we want them to curate.

      Is it alright to let this sort of thing go on at the smaller-scale, personally shared level? I would suggest that the issue is not this small-scale conversation, which can happen linearly, but the larger-scale amplification of misinformation by sources; we need to focus there. Get rid of the algorithmic amplification of the fringe bits, which is polarizing and toxic. Only allow the amplification of the more broadly accepted, fact-based, edited, and curated information.

    2. Facebook deploys tens of thousands of people to moderate user content in dozens of languages. It relies on proprietary machine-learning and other automated tools, developed at enormous cost. We cannot expect comparable investment from a diverse ecosystem of middleware providers. And while most providers presumably will not handle as much content as Facebook does, they will still need to respond swiftly to novel and unpredictable material from unexpected sources. Unless middleware services can do this, the value they provide will be limited, as will users' incentives to choose them over curation by the platforms themselves.

      Does heavy curation even need to exist? If a social company were able to push a linear feed of content to people without the algorithmic forced engagement, then the smaller, fringe material wouldn't have the reach. The majority of the problem would be immediately solved with this single feature.

    3. The First Amendment precludes lawmakers from forcing platforms to take down many kinds of dangerous user speech, including medical and political misinformation.

      Compare social media with the newspaper business from this perspective.

      People joined social media not knowing the end effects, but now they don't have a choice of platform after the fact. Social platforms accelerate disinformation using algorithms.

      Because there is choice amongst newspapers, people can easily move and if they'd subscribed to a racist fringe newspaper, they could easily end their subscription and go somewhere else. This is patently not the case for any social media. There's a high hidden personal cost for connectivity that isn't taken into account. The government needs to regulate this and not the speech portion.

      Social media should be considered a common carrier and regulated as such. Forcing this was an easier and more logical process for the telephone, electricity, and other utilities, as their cost of implementation was orders of magnitude higher. The data formats and storage for social media should be standardized (potentially even in three or more formats), and that standard should be the common-carrier requirement imposed. Would this properly skirt the First Amendment issues?

    4. Fukuyama's work, which draws on both competition analysis and an assessment of threats to democracy, joins a growing body of proposals that also includes Mike Masnick's "protocols not platforms," Cory Doctorow's "adversarial interoperability," my own "Magic APIs," and Twitter CEO Jack Dorsey's "algorithmic choice."

      A nice overview of current work on fixing the monopoly problem in the social media space. I hadn't heard about Fukuyama's or Daphne Keller's versions before.

      I'm not sure I think Dorsey's is actually a thing. I suspect it is vaporware from the word go.

      IndieWeb has been working slowly at the problem as well.

  18. Jul 2021
    1. One of the reasons for this situation is that the very media we have mentioned are so designed as to make thinking seem unnecessary (though this is only an appearance). The packag­ing of intellectual positions and views is one of the most active enterprises of some of the best minds of our day. The viewer of television, the listener to radio, the reader of magazines, is presented with a whole complex of elements-all the way from ingenious rhetoric to carefully selected data and statistics-to make it easy for him to "make up his own mind" with the mini­mum of difficulty and effort. But the packaging is often done so effectively that the viewer, listener, or reader does not make up his own mind at all. Instead, he inserts a packaged opinion into his mind, somewhat like inserting a cassette into a cassette player. He then pushes a button and "plays back" the opinion whenever it seems appropriate to do so. He has performed ac­ceptably without having had to think.

      This is an incredibly important fact. It's gone even further with additional advances in advertising and social media, not to mention the slow-drip mental programming provided by algorithmic feeds, which tend to polarize their readers.

      People simply aren't actively reading their content, comparing, contrasting, or even fact checking it.

      I suspect that this book could use an additional overhaul to cover many of these aspects.

  19. May 2021
    1. This runs counter to the time-based structure of traditional blogs: posts presented in reverse chronological order based on publication date.

      Admittedly many blogs primarily operate on time-based order, but it would be fun if more digital gardens provided a most-recently updated feed of their content.

      This particular article is a case in point. I've read it before in an earlier stage and want to follow updates to it. I can subscribe to Maggie's feed, but currently her most recent post in my reader is dated three weeks ago. Without a ping from another service surfacing the notification, I would have missed the significant update to this piece, which has prompted me to re-read it for updates on the ideas it contains.

      Some platforms like MediaWiki do provide feeds for recently updated. My colleague David Shanske has recently updated a WordPress plugin he built so that it provides WordPress sites with a feed for most recent updates, so that one would see not only new content, but also content which is added or updated from the past. As a result, here's his "updated feed" https://david.shanske.com/updated/feed/ which is cleverly useful.

  20. Apr 2021
    1. Others are asking questions about the politics of weblogs – if it’s a democratic medium, they ask, why are there so many inequalities in traffic and linkage?

      This still exists in the social media space, but has gotten even worse with the rise of algorithmic feeds.

  21. Mar 2021
    1. One person writing a tweet would still qualify for free-speech protections—but a million bot accounts pretending to be real people and distorting debate in the public square would not.

      Do bots have or deserve the right not only to free speech, but to free reach?

    1. In this respect, we join Fitzpatrick (2011) in exploring “the extent to which the means of media production and distribution are undergoing a process of radical democratization in the Web 2.0 era, and a desire to test the limits of that democratization”

      Something about this is reminiscent of WordPress' mission to democratize publishing. We can also compare it to Facebook, whose (stated) mission is to connect people, while its actual mission is to make money by seemingly radicalizing people to the extremes of our political spectrum.

      This highlights the fact that while many may look at content moderation on platforms like Facebook, including the removal of voices or the deplatforming of people like Donald J. Trump or Alex Jones, as an anti-democratic move, in fact it is not. Because of Facebook's active move to accelerate extreme ideas by pushing them algorithmically, the platform is actively being un-democratic. Democratic behavior on Facebook would look like one voice, one account, and reach only commensurate with that person's standing in real life. Instead, the algorithmic timeline gives far outsized influence and reach to some of the most extreme voices on the platform. This is patently un-democratic.

    1. In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded. Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.
    2. “When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.
  22. Feb 2021
    1. When your digital news feed doesn’t contain links, when it cannot be linked to, when it can’t be indexed, when you can’t copy a paragraph and paste it into another application: when this happens your news feed is not flawed or backwards looking or frustrating. It is broken.

      If your news feed doesn't contain links, can't be linked to, indexed, or copied and pasted, it is broken.

      How can this be tied into the five R's of Open Educational Resources: Retain, Reuse, Revise, Remix and/or Redistribute content (and perhaps my Revise/Request update ideas: https://boffosocko.com/2018/08/30/the-sixth-r-of-open-educational-resources-oer/)?

    1. Stream presents us with a single, time ordered path with our experience (and only our experience) at the center.

      And even if we are physically next to another person, our experience will be individualized. We don't know what other people see, nor can we be sure we are looking at each other.

  23. Dec 2020
    1. The few people who are willing to defend these sites unconditionally do so from a position of free-speech absolutism. That argument is worthy of consideration. But there’s something architectural about the site that merits attention, too: There are no algorithms on 8kun, only a community of users who post what they want. People use 8kun to publish abhorrent ideas, but at least the community isn’t pretending to be something it’s not. The biggest social platforms claim to be similarly neutral and pro–free speech when in fact no two people see the same feed. Algorithmically tweaked environments feed on user data and manipulate user experience, and not ultimately for the purpose of serving the user. Evidence of real-world violence can be easily traced back to both Facebook and 8kun. But 8kun doesn’t manipulate its users or the informational environment they’re in. Both sites are harmful. But Facebook might actually be worse for humanity.
    2. Every time you click a reaction button on Facebook, an algorithm records it, and sharpens its portrait of who you are.

      It might be argued that the design is not creating a portrait of who you are, but of who Facebook wants you to become. The real question is: Who does Facebook want you to be, and are you comfortable with being that?

  24. Oct 2020
    1. Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.[3] Others have complained about how they’ve been used to spread disinformation and propaganda.[4] Some have charged that the platforms are just too powerful.[5] Others have called attention to inappropriate account and content takedowns,[6] while some have argued that the attempts to moderate discriminate against certain political viewpoints.

      [3] A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they've kicked this firm off their site, but I think they've got a lot of explaining to do.”).

      [4] Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”).

      [5] Julia Carrie Wong, #Breaking Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.): “Curious why I think FB has too much power? Let's start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn't dominated by a single censor. #BreakUpBigTech.”).

      [6] Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.): “What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”).

      Most of these problems fall under the subheading of what happens when social media platforms algorithmically push or accelerate content. An individual with an extreme view can publish a piece of vile or disruptive content, and because it's inflammatory, the silos promote it, which provides even more eyeballs, and the acceleration becomes a positive feedback loop. As a result the social silo benefits from engagement for advertising purposes, but the community and the commons are irreparably harmed.

      If this one piece were removed, then the commons would be much healthier, fringe ideas and abuse that are abhorrent to most would be removed, and the broader democratic views of the "masses" (good or bad) would prevail. Without the algorithmic push of fringe ideas, that sort of content would be marginalized in the same way we want our inane content like this morning's coffee or today's lunch marginalized.

      To analogize it, we've provided social media machine guns to the most vile and fringe members of our society and the social platforms are helping them drag the rest of us down.

      If all ideas and content were given the same linear, non-promoted distribution, we would all be much better off, and we wouldn't need as much human curation.

    1. I literally couldn’t remember when I’d last looked at my RSS subscriptions. On the surface, that might seem like a win: Instead of painstakingly curating my own incoming news, I can effortlessly find an endless supply of interesting, worthwhile content that the algorithm finds for me. The problem, of course, is that the algorithm isn’t neutral: It’s the embodiment of Facebook and Twitter’s technology, data analysis, and most crucial, business model. By relying on the algorithm, instead of on tags and RSS, I’m letting an army of web developers, business strategists, data scientists, and advertisers determine what gets my attention. I’m leaving myself vulnerable to misinformation, and manipulation, and giving up my power of self-determination.
    1. Schemas aren't neutral

      This section highlights why relying on algorithmic feeds in social media platforms like Facebook and Twitter can be toxic. Your feed is full of what they think you'll like and click on instead of giving you the choice.

    1. Third, content collapse puts all types of information into direct competition. The various producers and providers of content, from journalists to influencers to politicians to propagandists, all need to tailor their content and its presentation to the algorithms that determine what people see. The algorithms don’t make formal or qualitative distinctions; they judge everything by the same criteria. And those criteria tend to promote oversimplification, emotionalism, tendentiousness, tribalism — the qualities that make a piece of information stand out, at least momentarily, from the screen’s blur.

      This is a terrifically painful and harmful thing. How can we redesign a system that doesn't function this way?

  25. Feb 2020
    1. The biggest drawback of algorithmic feeds is that you might be looking at irrelevant content. When you see something on your timeline and want to comment, you will have to check the timestamp to see if your comment is still relevant or not.
  26. Dec 2019
    1. Alexander Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed.

      Tags were used for discovery of specific types of content. Who needs that now that our new overlords of artificial intelligence and algorithmic feeds can tell us what we want to see?!

      Of course we still need tags!!! How are you going to know serendipitously that you need more poetry in your life until you run into the tag on a service like IndieWeb.xyz? An algorithmic feed is unlikely to notice--or at least in my decade of living with them I've yet to run into poetry in one.

  27. Apr 2019
    1. As people start following more and more accounts on a social network, they reach a point where the number of candidate stories exceeds their capacity to see them all. Even before that point, the sheer signal-to-noise ratio may decline to the point that it affects engagement. Almost any network that hits this inflection point turns to the same solution: an algorithmic feed.
  28. Feb 2019
    1. The social media browser plugins. Here’s the killer feature. Create at least one (could be many competing) browser plugins that enable you to (a) select feeds and then (b) display them alongside a user’s Twitter, Facebook, etc., feeds. (This could be an adaptation of Greasemonkey.) In other words, once this feature were available, you could tell your friends: “I’m not on Twitter. But if you want to see my Tweet-like posts appear in your Twitter feed, then simply install this plugin and input my feed address. You’ll see my posts pop up just as if they were on Twitter. But they’re not! And we can do this because you can control how any website appears to you from your own browser. It’s totally legal and it’s actually a really good idea.” In this way, while you might never look at Twitter or Facebook, you can stay in contact with your friends who are still there—but on your own terms.

      This is an intriguing idea. In particular, it would be cool if I could input my OPML file of people I'm following and have a plugin like this work with other social readers.

  29. Nov 2018
    1. While the NTK Network does not have a large audience of its own, its content is frequently picked up by popular conservative outlets, including Breitbart.

      One wonders if they're seeding it and spreading it falsely on Facebook. Why not use the problem as a feature?!

  30. Oct 2018
  31. Sep 2018
    1. My relationship is a lot healthier with blogs that I visit when I please. This is another criticism I have with RSS as well—I don’t want my favorite music blog sending me updates every day, always in my face. I just want to go there when I am ready to listen to something new. (I also hope readers to my blog just stop by when they feel like obsessing over the Web with me.)

      Amen!