17 Matching Annotations
  1. Feb 2023
    1. But a good short story is always basically a memento mori.

      An interesting theory...

      An Occurrence at Owl Creek Bridge by Ambrose Bierce comes to mind as an excellent example.

    2. I have to report that the AI did not make a useful or pleasant writing partner. Even a state-of-the-art language model cannot presently “understand” what a fiction writer is trying to accomplish in an evolving draft. That’s not unreasonable; often, the writer doesn’t know exactly what they’re trying to accom­plish! Often, they are writing to find out.
  2. Sep 2022
  3. Jun 2022
    1. It was the expe­ri­ence of draft­ing the spec that changed my view, and my pace. Writing! Gets you every time!
    2. I will just observe that there is some­thing about this tech­nol­ogy that has seemed, over the years, to scold rather than invite; enclose rather than expand; and strip away rather than layer upon.
  4. May 2022
    1. Robin Sloan, a writer and media inventor, asks reviewers of his forthcoming book, Mr. Penumbra's 24-Hour Bookstore, to share their "mental state" via marginalia. Developing a visual language for real-time annotations, he welcomes people to go through his text at a reader's pace, marking their reactions in real time.
  5. Mar 2022
  6. Feb 2022
    1. We need to get our thoughts on paper first and improve them there, where we can look at them. Especially complex ideas are difficult to turn into a linear text in the head alone. If we try to please the critical reader instantly, our workflow would come to a standstill. We tend to call extremely slow writers, who always try to write as if for print, perfectionists. Even though it sounds like praise for extreme professionalism, it is not: A real professional would wait until it was time for proofreading, so he or she can focus on one thing at a time. While proofreading requires more focused attention, finding the right words during writing requires much more floating attention.

      Proofreading while rewriting, structuring, or doing the thinking and creative parts of writing is a form of bikeshedding. It is easy to focus on small, picayune fixes while writing, but doing so distracts from the more important parts of the work, which truly need one's attention.

      Get your ideas down on paper first and save proofreading for the end. Switching contexts from thinking and creativity to spelling, grammar, and typography is taxing, akin to multitasking.


      Link: Draft No. 4 and using Webster's 1913 dictionary for choosing better words as a discrete step within the rewrite.


      Linked to above: Are there other dictionaries, thesauruses, books of quotations, or individual commonplace books or waste books that can serve as resources for finding better words, phrases, or phrasing when writing? Imagine searching through Thoreau's commonplace book for interesting turns of phrase. Naturally, searching through one's own commonplace book is a great place to start, if you're saving those sorts of things, especially from fiction.

      Link this to Robin Sloan's AI talk and using artificial intelligence and corpora of literature to generate writing.

    1. https://every.to/superorganizers/tasting-notes-with-robin-sloan-25629085

      A discussion with Robin Sloan about the creative portion of his writing practice, which is heavily driven by the store of notes he takes in a Field Notes waste book and keeps in nvALT.

    2. I've observed for myself that not all weeks of writing are made equal. When I do try to impose a schedule on myself – like resolving that ‘I'll write for three hours every day and hit 1200 words’ – it can work out OK, but it’s usually not that great. But I have learned that when I’m really on a roll – when I’ve found a voice that’s really working and that I’m excited about – I need to just clear the decks and go with it. I will empty my schedule, dive in, and stay up late in order to be as productive as I can. I would say this is how I got both of my novels written.

      Robin Sloan's writing process sounds similar to that of Niklas Luhmann, who chose to work on things that seemed exciting and fun. This is helped, in part, by having a large quantity of interesting notes to work from. Both used their notes as stores to fuel their internal motivation to get work done.

    3. The third way I interact with my notes is a mechanism I’ve engineered whereby they are slowly presented to me randomly, and on a steady drip, every day. I’ve created a system so random notes appear every time I open a browser tab. I like the idea of being presented and re-presented with my notations of things that were interesting to me at some point, but that in many cases I had forgotten about. The effect of surprise creates interesting and productive new connections in my brain.

      Robin Sloan has built a system that will present him with random notes from his archive every time he opens a browser tab.
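
      A minimal sketch of such a new-tab mechanism (the folder layout and function name here are assumptions for illustration, not Robin's actual setup): pick one note at random from a directory of plain-text notes, ready to be rendered whenever a fresh tab opens.

```python
import random
from pathlib import Path

def pick_random_note(notes_dir, rng=random):
    """Return the text of one randomly chosen .txt note in notes_dir."""
    paths = sorted(Path(notes_dir).glob("*.txt"))
    if not paths:
        raise FileNotFoundError(f"no notes found in {notes_dir}")
    # rng is anything with a .choice() method, so a seeded
    # random.Random can make the selection reproducible for testing.
    return rng.choice(paths).read_text()
```

      A browser new-tab extension would then just call this (or an equivalent endpoint) and display the result, recreating the "steady drip" of forgotten notes.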

    4. That ‘taste’ is a very personal thing, and I don’t think I can really explain it. But I’m pretty sure it means that, for me, note-taking is a very long-term, gradual process of finding my way towards something; I just can’t quite articulate what that something is.

      I like the idea of taking notes as a means of finding one's way towards something which can't be articulated.

      This is an interesting way that one could define insight.

    5. Transferring my notes from notebooks into nvALT is a process that I always enjoy. When I fill up a physical notebook, I'll go through it, acting as a sort of loose, first filter for the material I’ve accumulated. I’ll cross out a few things that are obviously garbage, but most of my notes make the cut, and I transcribe them into nvALT. When that’s done, I throw away the notebook.

      Robin Sloan has a waste book practice: he takes his notes in small Field Notes notebooks and transcribes them into nvALT. When he's done, he throws away the notebook.

  7. Jan 2022
    1. https://vimeo.com/232545219

      from: Eyeo Conference 2017

      Description

      Robin Sloan at Eyeo 2017 | Writing with the Machine | Language models built with recurrent neural networks are advancing the state of the art on what feels like a weekly basis; off-the-shelf code is capable of astonishing mimicry and composition. What happens, though, when we take those models off the command line and put them into an interactive writing environment? In this talk Robin presents demos of several tools, including one presented here for the first time. He discusses motivations and process, shares some technical tips, proposes a course for the future, and along the way writes at least one short story together with the audience: all of us, and the machine.

      Notes

      Robin created a corpus using If Magazine and Galaxy Magazine from the Internet Archive and used it as a writing tool. He talks about using a few other models for generating text.
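
      As a rough illustration of generating text from a hand-built corpus, here is a character-level Markov chain, a deliberately simpler stand-in for the recurrent neural networks Robin actually used (the corpus string and function names are invented for the example):

```python
import random
from collections import defaultdict

def build_model(corpus, order=3):
    """Map each length-`order` context to the characters observed after it."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def generate(model, seed, length=80, rng=random):
    """Extend `seed` one character at a time by sampling from the model."""
    order = len(next(iter(model)))
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # dead end: context never seen in the corpus
            break
        out += rng.choice(choices)
    return out
```

      Training the same idea on the full text of If and Galaxy would give completions that echo midcentury science fiction, which is the effect Robin exploits at much higher quality with neural models.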

      Some of the idea here is reminiscent of the way John McPhee used Webster's 1913 dictionary for finding words (le mot juste) for his work, as tangentially suggested in "Draft No. 4" in The New Yorker (2013-04-22)

      Cross reference: https://hypothes.is/a/t2a9_pTQEeuNSDf16lq3qw and https://hypothes.is/a/vUG82pTOEeu6Z99lBsrRrg from https://jsomers.net/blog/dictionary


      Croatian acapella singing: klapa https://www.youtube.com/watch?v=sciwtWcfdH4


      Writing using the adjacent possible.


      Corpus building as an art [~37:00]

      Forgetting what one trained their model on and then seeing the unexpected come out of it. This is similar to Luhmann's use of the zettelkasten as a serendipitous writing partner.

      Open questions

      How might we use information theory to do this more easily?

      What does a person or machine's "hand" look like in the long term with these tools?

      Can we use corpus linguistics in reverse for this?

      What sources would you use to train your model?

      References:

      • Andrej Karpathy. 2015. "The Unreasonable Effectiveness of Recurrent Neural Networks."
      • Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, et al. 2015. "Generating Sentences from a Continuous Space." arXiv:1511.06349
      • Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. "A Hybrid Convolutional Variational Autoencoder for Text Generation." arXiv:1702.02390
      • Soroush Mehri, et al. 2017. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." arXiv:1612.07837 (applies neural networks to sound and sound production)