- Sep 2023
-
www.wired.com
-
According to YouTube chief product officer Neal Mohan, 70 percent of views on YouTube are from recommendations—so the site’s algorithms are largely responsible for amplifying RT’s propaganda hundreds of millions of times.
-
It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification.
-
- Jul 2022
-
herman.bearblog.dev
-
I dislike the separation of Trending and Newest. This is one of the main reasons for false negatives as new articles don’t receive many (if any) views. I’m thinking about randomly interspersing new articles in the trending feed to give them the potential of getting their first few votes. This (as ever) has an effect on quality, so has to be done with care.
Introducing some randomness for new unranked articles is an interesting and likely useful tactic.
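A minimal sketch of what that interspersion could look like; the function name and the slot_probability parameter are my own illustrative assumptions, not Bear Blog's actual implementation:

```python
import random

def build_feed(trending, unranked, slot_probability=0.2, seed=None):
    """Intersperse brand-new, unranked posts into a trending feed.

    `trending` is assumed to be sorted by score; `unranked` holds new posts
    with no votes yet. With probability `slot_probability`, a slot ahead of
    the next trending post goes to a random unranked post so it has a
    chance to earn its first few votes.
    """
    rng = random.Random(seed)
    pool = list(unranked)
    rng.shuffle(pool)
    feed = []
    for post in trending:
        if pool and rng.random() < slot_probability:
            feed.append(pool.pop())
        feed.append(post)
    feed.extend(pool)  # any unranked posts left over go at the tail
    return feed
```

Tuning slot_probability is exactly the quality trade-off Herman mentions: too high and the feed fills with unvetted posts, too low and new posts never surface.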
-
Once a post goes viral on Twitter, Hacker News, Reddit, or anywhere else off-platform, it has the potential to form a “Katamari ball” where it gets upvotes because it has upvotes (which means it gets more upvotes, because it has more upvotes, which means…well…you get it). This is also known as "the network effect", but I feel a Katamari ball better illustrates it.
Network effects can describe a broad variety of phenomena. Is "Katamari ball" a better descriptor of this specific phenomenon?
How does one prioritize richer, Lindy-quality library material that may be even more beneficial than what is simply new?
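To make the Katamari-ball dynamic above concrete, here is a toy simulation of rich-get-richer voting; the probabilities and numbers are assumptions for illustration, not any platform's actual ranking:

```python
import random

def simulate_votes(initial_scores, readers=10_000, baseline=1.0, seed=42):
    """Rich-get-richer sketch: each reader upvotes one post with
    probability proportional to (current score + baseline), so an early
    lead compounds over time."""
    rng = random.Random(seed)
    scores = list(initial_scores)
    for _ in range(readers):
        weights = [s + baseline for s in scores]
        chosen = rng.choices(range(len(scores)), weights=weights)[0]
        scores[chosen] += 1
    return scores

# A post seeded with a handful of off-platform upvotes ends up far ahead
# of otherwise identical posts that started at zero.
print(simulate_votes([5, 0, 0, 0]))
```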
-
- May 2022
-
thenewstack.io
-
“It was 2017, I would say, when Twitter started really cracking down on bots in a way that they hadn’t before — taking down a lot of bad bots, but also taking down a lot of good bots too. There was an appeals process [but] it was very laborious, and it just became very difficult to maintain stuff. And then they also changed all their API’s, which are the programmatic interface for how a bot talks to Twitter. So they changed those without really any warning, and everything broke.
Just as political actors can chill speech through official action, social media corporations can use changes in policy and APIs to stifle and chill speech online.
This doesn't mean that there aren't bad actors building bots to actively cause harm, but there is a class of potentially helpful and useful bots (tools) that can make a social space better or more interesting.
How does one regulate this sort of speech? Perhaps the answer is simply not to algorithmically amplify these bots and their speech over that of humans.
More and more I think the answer is to make online social interactions more like in-person interactions. Too much of social media gives an even bigger bullhorn to the crazy preacher on the corner of Main Street who shouts at crowds that would otherwise simply ignore him. Social media has made it easier for us to shout them back down, and in doing so, we only make them heard by even more people. We need a negative feedback mechanism to dampen these effects online the same way they would have been dampened in person.
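One way to picture such a negative feedback mechanism is to make additional reach grow only sublinearly with engagement and to discount sudden surges. The function, parameters, and thresholds below are purely my own illustrative assumptions, not any platform's ranking logic:

```python
import math

def dampened_reach(total_engagements, velocity_per_hour,
                   base_reach=100.0, surge_threshold=50.0):
    """Negative-feedback sketch: reach grows with the log of total
    engagement, and a spike in engagement velocity (outrage going viral)
    reduces further amplification instead of increasing it."""
    growth = math.log1p(total_engagements)
    excess = max(0.0, velocity_per_hour - surge_threshold)
    surge_penalty = 1.0 / (1.0 + excess / surge_threshold)
    return base_reach * growth * surge_penalty

# A post with the same total engagement reaches fewer new people when it
# arrives as a sudden surge than when it accumulates slowly.
print(round(dampened_reach(1_000, velocity_per_hour=10)),
      round(dampened_reach(1_000, velocity_per_hour=500)))
```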
-
- Mar 2022
-
-
First is that it actually lowers paid acquisition costs. It lowers them because the Facebook Ads algorithm rewards engaging advertisements with lower CPMs and lots of distribution. Facebook does this because engaging advertisements are just like engaging posts: they keep people on Facebook.
Engaging advertisements on Facebook benefit from lower acquisition costs because the Facebook algorithm rewards more interesting advertisements with lower CPMs and wider distribution. This is done, as with all things driven by surveillance capitalism, to keep eyeballs on Facebook.
This isn't too dissimilar to large cable networks that provide free, high-quality advertising to mass manufacturers in late-night slots. The network generally can't sell all of its advertising inventory, particularly in low-viewership hours, so it will offer free or incredibly cheap commercial rates to its bigger buyers (like Coca-Cola or McDonald's, for example) to fill space and have more professional-looking advertisements between the low-quality advertisements from local mom-and-pop stores and the "as seen on TV" spots. These higher-quality commercials help keep the audience engaged and prevent viewers from changing the channel.
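Returning to the CPM incentive described in the quote, here is a toy model of how engagement might translate into cheaper impressions. The formula, the reference rate, and the cap are made-up assumptions for illustration, not Facebook's actual auction:

```python
def effective_cpm(base_cpm, engagement_rate, reference_rate=0.01):
    """Toy model: the more an ad's engagement rate exceeds a reference
    rate, the cheaper its impressions get, so engaging ads win on both
    price and distribution."""
    discount = min(engagement_rate / reference_rate, 2.0)  # cap the benefit
    return base_cpm / (1.0 + discount)

# An ad engaging at 3% versus a 1% reference pays roughly a third of the
# base CPM under this toy formula.
print(round(effective_cpm(base_cpm=10.0, engagement_rate=0.03), 2))
```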
-
- Aug 2021
-
-
Fukuyama's answer is no. Middleware providers will not see privately shared content from a user's friends. This is a good answer if our priority is privacy. It lets my cousin decide which companies to trust with her sensitive personal information. But it hobbles middleware as a tool for responding to her claims about vaccines. And it makes middleware providers far less competitive, since they will not be able to see much of the content we want them to curate.
Is it alright to let this sort of thing go at the smaller, personally shared scale? I would suggest that the issue is not this small-scale conversation, which can happen linearly; rather, we need to focus on the larger-scale amplification of misinformation by sources. Get rid of the algorithmic amplification of the fringe bits, which is polarizing and toxic. Only allow the amplification of more broadly accepted, fact-based, edited, and curated information.
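As a rough sketch of what that policy could look like in code; the curated-source list, the fact_checked flag, and the boost factor are hypothetical assumptions, not a description of any existing middleware:

```python
def amplification_factor(post, curated_sources, boost=5.0):
    """Hypothetical policy: only posts from curated, broadly accepted,
    fact-checked sources are eligible for algorithmic amplification;
    everything else is shown linearly (factor 1.0) to its own audience."""
    if post["source"] in curated_sources and post.get("fact_checked", False):
        return boost
    return 1.0

# A fringe post still reaches its own followers, but is never boosted.
print(amplification_factor({"source": "random-fringe-site", "fact_checked": False},
                           curated_sources={"ap-news", "reuters"}))
```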
-