- Nov 2024
-
-
These repeated acts of public description add each idea to a supersaturated, subconscious solution of fragmentary elements that have the potential to become something bigger. Every now and again, a few of these fragments will stick to each other and nucleate, crystallizing a substantial, synthetic analysis out of all of those bits and pieces I’ve salted into that solution of potential sources of inspiration.
Doctorow analogizes his reading and writing in the same sort of chemistry/statistical-mechanics terms that I have used in the past.
-
The act of making your log-file public requires a rigor that keeping personal notes does not. Writing for a notional audience — particularly an audience of strangers — demands a comprehensive account that I rarely muster when I’m taking notes for myself. I am much better at kidding myself about my ability to interpret my notes at a later date than I am at convincing myself that anyone else will be able to make heads or tails of them. Writing for an audience keeps me honest.
-
There’s a version of the “why writers should blog” story that is tawdry and mercenary: “Blog,” the story goes, “and you will build a brand and a platform that you can use to promote your work.” Virtually every sentence that contains the word “brand” is bullshit, and that one is no exception.
-
- Jul 2024
-
pluralistic.net
-
Daily links from Cory Doctorow by [[Cory Doctorow]] 2024-02-19
-
- Mar 2024
-
pluralistic.net
-
"The Curse of Recursion: Training on Generated Data Makes Models Forget," a recent paper, goes beyond the ick factor of AI that is fed on botshit and delves into the mathematical consequences of AI coprophagia: https://arxiv.org/abs/2305.17493 Co-author Ross Anderson summarizes the finding neatly: "using model-generated content in training causes irreversible defects": https://www.lightbluetouchpaper.org/2023/06/06/will-gpt-models-choke-on-their-own-exhaust/ Which is all to say: even if you accept the mystical proposition that more training data "solves" the AI problems that constitute total unsuitability for high-value applications that justify the trillions in valuation analysts are touting, that training data is going to be ever more elusive.
-
For people inflating the current AI hype bubble, this idea that making the AI "more powerful" will correct its defects is key. Whenever an AI "hallucinates" in a way that seems to disqualify it from the high-value applications that justify the torrent of investment in the field, boosters say, "Sure, the AI isn't good enough…yet. But once we shovel an order of magnitude more training data into the hopper, we'll solve that, because (as everyone knows) making the computer 'more powerful' solves the AI problem."
-
As the lawyers say, this "cites facts not in evidence." But let's stipulate that it's true for a moment. If all we need to make the AI better is more training data, is that something we can count on?
Consider the problem of "botshit," Andre Spicer and co's very useful coinage describing "inaccurate or fabricated content" shat out at scale by AIs: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4678265
"Botshit" was coined last December, but the internet is already drowning in it. Desperate people, confronted with an economy modeled on a high-speed game of musical chairs in which the opportunities for a decent livelihood grow ever scarcer, are being scammed into generating mountains of botshit in the hopes of securing the elusive "passive income": https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
-
- Dec 2022
-
voices.uchicago.edu
-
Censorship and Information Control During Information Revolutions
Exploring how new information technologies from the printing press to the digital age have stimulated new forms of censorship and information control.
https://voices.uchicago.edu/censorship/
Related YouTube channel/videos: https://www.youtube.com/channel/UCeNP7NIWmB70wFBv9QolYkg
-
-
pluralistic.net
-
But there's another side to this playlistification of feeds: playlists and other recommendation algorithms are chokepoints: they are a way to durably interpose a company between a creator and their audience. Where you have chokepoints, you get chokepoint capitalism: https://chokepointcapitalism.com/
Massive social media networks use algorithmic feeds and other programmatic, centralizing methods to interpose themselves between people trying to reach each other, often in ways that allow them to extract additional value from the participants. They become necessary platforms that create chokepoints for flows of information, a dynamic Cory Doctorow and Rebecca Giblin call "chokepoint capitalism".
-
- Apr 2022
-
www.youtube.com
-
using Roam as almost a tool to convey information to your future self
One's note-taking is not only a conversation with the text or even the original author; it is also a conversation you're having with your future self. This effect is accelerated when one cross-links ideas within one's note box and revisits them at regular intervals.
Example of someone who uses Roam Research and talks about the prevalence of using it as a "conversation with your future self."
This is very similar to the patterns seen in the commonplace book tradition, in the blogosphere (Cory Doctorow comes to mind), and on the IndieWeb, which often recommends writing on your own website to document how you did things for your future self.
-
- Jan 2022
-
pluralistic.net
-
I go through my old posts every day. I know that much – most? – of them are not for the ages. But some of them are good. Some, I think, are great. They define who I am. They're my outboard brain.
Cory Doctorow calls his blog and its archives his "outboard brain".
-
First and foremost, I do it for me. The memex I've created by thinking about and then describing every interesting thing I've encountered is hugely important for how I understand the world. It's the raw material of every novel, article, story and speech I write.
On why Cory Doctorow keeps a digital commonplace book.
-
- Jan 2016
-
www.locusmag.com
-
Those questions are hard, but they’re not wicked.
-
they totally broke the FCC’s regulatory model
The regulatory problem posed by software-defined radios.
-