42 Matching Annotations
  1. Nov 2023
  2. Oct 2023
  3. Aug 2023
  4. Jun 2023
  5. May 2023
  6. Mar 2023
    1. Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.

      Would the argument here for stochastic parrots also apply, or be abstracted, to Markov monkeys?

  7. Feb 2023
    1. LaMDA's safety features could also be limiting: Michelle Taransky found that "the software seemed very reluctant to generate people doing mean things". Models that generate toxic content are highly undesirable, but a literary world where no character is ever mean is unlikely to be interesting.
    2. If I were going to use an AI, I'd want to plug in my commonplace book and personal notes and give them massive priority, followed secondarily by the materials I've read, watched, and listened to.

    3. Several participants noted the occasionally surreal quality of Wordcraft's suggestions.

      Wordcraft's hallucinations can create interesting and creatively surreal suggestions.

      How might one dial up or down the ability to hallucinate or create surrealism within an artificial intelligence used for thinking, writing, etc.?

    4. Writers struggled with the fickle nature of the system. They often spent a great deal of time wading through Wordcraft's suggestions before finding anything interesting enough to be useful. Even when writers struck gold, it proved challenging to consistently reproduce the behavior. Not surprisingly, writers who had spent time studying the technical underpinnings of large language models or who had worked with them before were better able to get the tool to do what they wanted.

      Because one may need to spend an inordinate amount of time filtering through an artificial intelligence's potentially bad suggestions, the time and energy spent keeping a commonplace book or zettelkasten may pay off magnificently in the long run.

  8. Nov 2022
  9. Oct 2022
    1. https://www.explainpaper.com/

      Another in a growing line of research tools for processing and making sense of research literature including Research Rabbit, Connected Papers, Semantic Scholar, etc.

      Functionality includes the ability to highlight sections of research papers and have natural language processing explain what those sections mean. There's also a "chat" feature that lets you ask questions about the paper and attempts to return reasonable answers: an artificial intelligence means of having an artificial "conversation with the text".

      cc: @dwhly @remikalir @jeremydean

  10. Sep 2022
  11. Aug 2022
    1. For the sake of simplicity, go to Graph Analysis Settings and disable everything but Co-Citations, Jaccard, Adamic Adar, and Label Propagation. I won't spend time explaining each because you can find them on the net, but these are essentially algorithms that find connections for you. Co-Citations, for example, uses second-order links, or links of links, which could generate ideas or help you create indexes. It essentially automates looking through the backlinks and local graphs as it generates possible relations for you.
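      Two of these link-prediction measures are simple enough to sketch directly. A minimal Python illustration over a note-link graph (the `graph` dict of note names is hypothetical, not the plugin's actual data structure):

```python
import math

def jaccard(graph, u, v):
    """Jaccard similarity of two notes' link sets: |shared| / |union|."""
    union = graph[u] | graph[v]
    return len(graph[u] & graph[v]) / len(union) if union else 0.0

def adamic_adar(graph, u, v):
    """Adamic-Adar index: sum over shared neighbours w of 1/log(deg(w)).

    Shared neighbours with few links count for more than hub notes
    that link to everything.
    """
    shared = graph[u] & graph[v]
    return sum(1.0 / math.log(len(graph[w]))
               for w in shared if len(graph[w]) > 1)
```

      Ranking every note pair by scores like these is, in effect, the automated pass over backlinks and second-order links that the plugin performs.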
  12. Jun 2022
    1. Harness collective intelligence augmented by digital technology, and unlock exponential innovation. Beyond old hierarchical structures and archaic tools.


      The words "beyond", "hierarchical", and "archaic" are all designed to marginalize prior thought and tools, which all work and upon which this broader idea is likely built. This is a potentially toxic means of creating "power over" this prior art rather than a more open spirit of "power with".

  13. Jan 2022
    1. https://vimeo.com/232545219

      from: Eyeo Conference 2017


      Robin Sloan at Eyeo 2017 | Writing with the Machine | Language models built with recurrent neural networks are advancing the state of the art on what feels like a weekly basis; off-the-shelf code is capable of astonishing mimicry and composition. What happens, though, when we take those models off the command line and put them into an interactive writing environment? In this talk Robin presents demos of several tools, including one presented here for the first time. He discusses motivations and process, shares some technical tips, proposes a course for the future — and along the way, writes at least one short story together with the audience: all of us, and the machine.


      Robin created a corpus using If Magazine and Galaxy Magazine from the Internet Archive and used it as a writing tool. He talks about using a few other models for generating text.

      Some of the idea here is reminiscent of the way John McPhee used the 1913 Webster Dictionary for finding words (or le mot juste) for his work, as tangentially suggested in "Draft No. 4" in The New Yorker (2013-04-22)

      Cross reference: https://hypothes.is/a/t2a9_pTQEeuNSDf16lq3qw and https://hypothes.is/a/vUG82pTOEeu6Z99lBsrRrg from https://jsomers.net/blog/dictionary

      Croatian a cappella singing: klapa https://www.youtube.com/watch?v=sciwtWcfdH4

      Writing using the adjacent possible.

      Corpus building as an art [~37:00]

      Forgetting what one trained their model on and then seeing the unexpected come out of it. This is similar to Luhmann's use of the zettelkasten as a serendipitous writing partner.

      Open questions:

      • How might we use information theory to do this more easily?
      • What does a person's or machine's "hand" look like in the long term with these tools?
      • Can we use corpus linguistics in reverse for this?
      • What sources would you use to train your model?


      • Andrej Karpathy. 2015. "The Unreasonable Effectiveness of Recurrent Neural Networks."
      • Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, et al. 2015. "Generating Sentences from a Continuous Space." arXiv:1511.06349.
      • Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. "A Hybrid Convolutional Variational Autoencoder for Text Generation." arXiv:1702.02390.
      • Soroush Mehri, et al. 2017. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." arXiv:1612.07837. Applies neural networks to sound and sound production.
    2. Markoff, a long-time chronicler of computing, sees Engelbart as one pole in a decades-long competition "between artificial intelligence and intelligence augmentation -- A.I. versus I.A."

      There is an interesting difference between artificial intelligence and intelligence automation. Index cards were already doing the second by the early 1940s.

  14. May 2021
  15. Mar 2021
  16. Jun 2020