38 Matching Annotations
  1. Feb 2023
    1. Sloan, Robin. “Author’s Note.” Experimental fiction. Wordcraft Writers Workshop, November 2022. https://wordcraft-writers-workshop.appspot.com/stories/robin-sloan.

      brilliant!

    2. "I have affirmed the premise that the enemy can be so simple as a bundle of hate," said he. "What else? I have extinguished the light of a story utterly.

      How fitting that the amanuensis in a short story written with the help of artificial intelligence has done the opposite of what the author intended!

    1. Wordcraft Writers Workshop by Andy Coenen - PAIR, Daphne Ippolito - Brain Research, Ann Yuan - PAIR, Sehmon Burnam - Magenta

      cross reference: ChatGPT

    2. LaMDA's safety features could also be limiting: Michelle Taransky found that "the software seemed very reluctant to generate people doing mean things". Models that generate toxic content are highly undesirable, but a literary world where no character is ever mean is unlikely to be interesting.
    3. A recurring theme in the authors’ feedback was that Wordcraft could not stick to a single narrative arc or writing direction.

      At what point does using an artificial-intelligence-based writing tool turn the writer into an editor of the computer's output rather than a writer in their own right?

    4. If I were going to use an AI, I'd want to plug in, and give massive priority to, my commonplace book and personal notes, followed secondarily by the materials I've read, watched, and listened to.
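
      As a minimal sketch of what that prioritization might look like: assume a folder of plain-text notes standing in for the commonplace book, a crude word-overlap score standing in for real retrieval, and a hypothetical generate() call standing in for whichever language model one actually uses.

      ```python
      from pathlib import Path

      def score(query_words: set[str], text: str) -> float:
          """Crude relevance score: fraction of query words that appear in the note."""
          note_words = set(text.lower().split())
          return len(query_words & note_words) / (len(query_words) or 1)

      def build_prompt(seed: str, notes_dir: str = "commonplace", top_k: int = 3) -> str:
          """Rank personal notes against the seed text and put the best ones first,
          so the model leans on one's own material before anything else."""
          query_words = set(seed.lower().split())
          scored = [(score(query_words, p.read_text()), p) for p in Path(notes_dir).glob("*.md")]
          best = [p.read_text() for s, p in sorted(scored, reverse=True)[:top_k] if s > 0]
          context = "\n---\n".join(best)
          return f"My notes (give these priority):\n{context}\n\nContinue this draft in light of my notes:\n{seed}"

      # prompt = build_prompt("a lighthouse keeper who collects other people's letters")
      # draft = generate(prompt)  # hypothetical call to whichever model one prefers
      ```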

    5. Several participants noted the occasionally surreal quality of Wordcraft's suggestions.

      Wordcraft's hallucinations can create interesting and creatively surreal suggestions.

      How might one dial up or down the ability to hallucinate or create surrealism within an artificial intelligence used for thinking, writing, etc.?

    6. Writers struggled with the fickle nature of the system. They often spent a great deal of time wading through Wordcraft's suggestions before finding anything interesting enough to be useful. Even when writers struck gold, it proved challenging to consistently reproduce the behavior. Not surprisingly, writers who had spent time studying the technical underpinnings of large language models or who had worked with them before were better able to get the tool to do what they wanted.

      Because one may need to spend an inordinate amount of time filtering through an artificial intelligence's potentially bad suggestions, the time and energy spent keeping a commonplace book or zettelkasten may pay off magnificently in the long run.

    7. Many authors noted that generations tended to fall into clichés, especially when the system was confronted with scenarios less likely to be found in the model's training data. For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship. Yudhanjaya Wijeratne attempted to deviate from standard fantasy tropes (e.g. heroes as cartographers and builders, not warriors), but Wordcraft insisted on pushing the story toward the well-worn trope of a warrior hero fighting back enemy invaders.

      Examples of artificial intelligence pushing toward pre-existing biases based on training data sets.

    8. Wordcraft tended to produce only average writing.

      How to improve on this state of the art?

    9. “...it can be very useful for coming up with ideas out of thin air, essentially. All you need is a little bit of seed text, maybe some notes on a story you've been thinking about or random bits of inspiration and you can hit a button that gives you nearly infinite story ideas.”- Eugenia Triantafyllou

      Eugenia Triantafyllou is talking about crutches for creativity and inspiration, but she seems to miss the value of collecting interesting tidbits along the road of life that one can use later. Instead, the emphasis becomes one of relying on an artificial intelligence to do it for you at the "hit of a button". If that is the case, then why not just let the artificial intelligence do all the work for you?

      This is the area where the cultural loss of mnemonics used in orality or even the simple commonplace book will make us easier prey for (over-)reliance on technology.


      Is serendipity really serendipity if it's programmed for you?

    10. Wordcraft shined the most as a brainstorming partner and source of inspiration. Writers found it particularly useful for coming up with novel ideas and elaborating on them. AI-powered creative tools seem particularly well suited to sparking creativity and addressing the dreaded writer's block.

      Just as writing generative annotations against a text (having a conversation with it) is a useful exercise for writers and thinkers, creative writers stand to benefit from similar textual creativity prompts.

      Compare Wordcraft's affordances with tools like Nabokov's card index (zettelkasten) method, Twyla Tharp's boxes, Mad Libs, cadavre exquis, et al.

      The key is to have some sort of creativity catalyst so that one isn't working in a vacuum or facing the dreaded blank page.

    11. Our team at Google Research built Wordcraft, an AI-powered text editor centered on story writing, to see how far we could push the limits of this technology.
    1. Author's note by Robin Sloan, November 2022

    2. I have to report that the AI did not make a useful or pleasant writing partner. Even a state-of-the-art language model cannot presently “understand” what a fiction writer is trying to accomplish in an evolving draft. That’s not unreasonable; often, the writer doesn’t know exactly what they’re trying to accomplish! Often, they are writing to find out.
    3. First, I’m impressed as hell by the Wordcraft team. Daphne Ippolito, Ann Yuan, Andy Coenen, Sehmon Burnam, and their colleagues engineered an impressive, provocative writing tool, but/and, more importantly, they investigated its use with sensitivity and courage.
  2. Aug 2022
    1. For the sake of simplicity, go to Graph Analysis Settings and disable everything but Co-Citations, Jaccard, Adamic Adar, and Label Propagation. I won't spend my time explaining each because you can find them on the net, but these are essentially algorithms that find connections for you. Co-Citations, for example, uses second-order links, or links of links, which could generate ideas or help you create indexes. It essentially automates looking through the backlinks and local graphs as it generates possible relations for you.
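
      As a rough illustration of what three of these link-based measures compute (a sketch over a made-up note-link graph, not the plugin's actual code): co-citation counting, Jaccard similarity, and Adamic Adar scoring.

      ```python
      from math import log

      # Toy vault: each note maps to the set of notes it links to.
      links = {
          "zettelkasten": {"luhmann", "note-taking"},
          "commonplace-book": {"note-taking", "rhetoric"},
          "luhmann": {"sociology"},
          "note-taking": {"memory", "mnemonics"},
          "rhetoric": {"memory"},
          "memory": set(),
          "mnemonics": set(),
          "sociology": set(),
      }

      def cocitations(a: str, b: str) -> int:
          """Second-order connection: how many notes link to both a and b."""
          return sum(1 for outs in links.values() if a in outs and b in outs)

      def jaccard(a: str, b: str) -> float:
          """Overlap of the two notes' outgoing links, normalised by their union."""
          union = links[a] | links[b]
          return len(links[a] & links[b]) / len(union) if union else 0.0

      def adamic_adar(a: str, b: str) -> float:
          """Shared neighbours, weighted so that rarely linked notes count for more."""
          shared = links[a] & links[b]
          return sum(1 / log(len(links[n])) for n in shared if len(links[n]) > 1)

      print(cocitations("luhmann", "note-taking"))        # 1: both are linked from "zettelkasten"
      print(jaccard("zettelkasten", "commonplace-book"))  # 0.33: they share "note-taking"
      print(adamic_adar("zettelkasten", "commonplace-book"))
      ```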
  3. Jan 2022
    1. https://vimeo.com/232545219

      from: Eyeo Conference 2017

      Description

      Robin Sloan at Eyeo 2017 | Writing with the Machine | Language models built with recurrent neural networks are advancing the state of the art on what feels like a weekly basis; off-the-shelf code is capable of astonishing mimicry and composition. What happens, though, when we take those models off the command line and put them into an interactive writing environment? In this talk Robin presents demos of several tools, including one presented here for the first time. He discusses motivations and process, shares some technical tips, proposes a course for the future, and along the way writes at least one short story together with the audience: all of us, and the machine.

      Notes

      Robin built a corpus from issues of If Magazine and Galaxy Magazine on the Internet Archive, trained a language model on it, and used that model as a writing tool. He also talks about using a few other models for generating text.

      Some of the ideas here are reminiscent of the way John McPhee used the 1913 Webster Dictionary for finding words (or le mot juste) for his work, as tangentially suggested in "Draft No. 4" in The New Yorker (2013-04-22).

      Cross reference: https://hypothes.is/a/t2a9_pTQEeuNSDf16lq3qw and https://hypothes.is/a/vUG82pTOEeu6Z99lBsrRrg from https://jsomers.net/blog/dictionary


      Croatian a cappella singing: klapa https://www.youtube.com/watch?v=sciwtWcfdH4


      Writing using the adjacent possible.


      Corpus building as an art [~37:00]

      Forgetting what one trained one's model on and then seeing the unexpected come out of it: this is similar to Luhmann's use of the zettelkasten as a serendipitous writing partner.
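
      Robin's actual tool used recurrent-neural-network language models trained on that corpus; as a much cruder stand-in that still shows the corpus-to-suggestion loop (and the serendipity of half-forgetting what went into the corpus), here is a sketch of a word-level Markov suggester. The corpus filename is a placeholder.

      ```python
      import random
      from collections import defaultdict

      def build_model(text: str, order: int = 2) -> dict:
          """Map each run of `order` words to the words that follow it in the corpus."""
          words = text.split()
          model = defaultdict(list)
          for i in range(len(words) - order):
              model[tuple(words[i:i + order])].append(words[i + order])
          return model

      def suggest(model: dict, seed: str, length: int = 30) -> str:
          """Continue the seed text by repeatedly sampling a plausible next word
          (assumes the model was built with order=2)."""
          out = seed.split()
          while len(out) < length:
              candidates = model.get(tuple(out[-2:]))
              if not candidates:                      # dead end: jump somewhere random
                  out.extend(random.choice(list(model)))
                  continue
              out.append(random.choice(candidates))
          return " ".join(out)

      # corpus.txt is a placeholder for whatever one has collected; Robin drew on
      # scanned issues of If and Galaxy from the Internet Archive.
      model = build_model(open("corpus.txt", encoding="utf-8").read())
      print(suggest(model, "the ship drifted"))
      ```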

      Open questions

      How might we use information theory to do this more easily?

      What does a person or machine's "hand" look like in the long term with these tools?

      Can we use corpus linguistics in reverse for this?

      What sources would you use to train your model?

      References:

      • Andrej Karpathy. 2015. "The Unreasonable Effectiveness of Recurrent Neural Networks."
      • Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, et al. 2015. "Generating Sentences from a Continuous Space." arXiv:1511.06349
      • Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. "A Hybrid Convolutional Variational Autoencoder for Text Generation." arXiv:1702.02390
      • Soroush Mehri, et al. 2017. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." arXiv:1612.07837 (applies neural networks to sound and sound production)