409 Matching Annotations
  1. Last 7 days
    1. https://chat.openai.com/g/g-z5XcnT7cQ-zettel-critique-assistant

      Zettel Critique Assistant<br /> By Florian Lengyel<br /> Critiques Zettels following three rules: Zettels should have a single focus, WikiLinks indicate a shift in focus, and Zettels should be written for your future self. The GPT will suggest how to split multi-focused notes into separate notes. It can also create a structure note from a list of note titles and abstracts.

      ᔥ[[ZettelDistraction]] in Share with us what is happening in your ZK this week. February 20, 2024

  2. Feb 2024
    1. Despite the opportunities of AI-based technologies for teaching and learning, they also raise ethical issues.

      Yes, I agree with this statement. Ethical issues range from academic integrity concerns to data privacy. AI technology based on algorithmic applications intentionally collects human data from its users, and those users do not know specifically what kinds of data, or what quantities of it, are collected.

    1. Joy, Bill. “Why the Future Doesn’t Need Us.” Wired, April 1, 2000. https://www.wired.com/2000/04/joy-2/.

      Annotation url: urn:x-pdf:753822a812c861180bef23232a806ec0

      Annotations: https://jonudell.info/h/facet/?user=chrisaldrich&url=urn%3Ax-pdf%3A753822a812c861180bef23232a806ec0&max=100&exactTagSearch=true&expanded=true

    2. The experiences of the atomic scientists clearly show the need to take personal responsibility, the danger that things will move too fast, and the way in which a process can take on a life of its own. We can, as they did, create insurmountable problems in almost no time flat. We must do more thinking up front if we are not to be similarly surprised and shocked by the consequences of our inventions.

      Bill Joy's mention that insurmountable problems can "take on a life of [their] own" is a spectacular reason for having a solid definition of what "life" is, so that we might have better means of subverting it in specific and potentially catastrophic situations.

    3. The GNR technologies do not divide clearly into commercial and military uses; given their potential in the market, it’s hard to imagine pursuing them only in national laboratories. With their widespread commercial pursuit, enforcing relinquishment will require a verification regime similar to that for biological weapons, but on an unprecedented scale. This, inevitably, will raise tensions between our individual privacy and desire for proprietary information, and the need for verification to protect us all. We will undoubtedly encounter strong resistance to this loss of privacy and freedom of action.

      While Joy looks at the Biological and Chemical Weapons Conventions as well as nuclear nonproliferation ideas, the entirety of what he's looking at is also embedded in the debate over gun control in the United States. We could choose better, but we actively choose against our better interests.

      What role does toxic capitalism have in pushing us towards these antithetical goals? The gun industry and gun lobby have had tremendous interest on that front. Surely ChatGPT and other LLM and AI tools will begin pushing on the profit-making levers shortly.

    4. Now, as then, we are creators of new technologies and stars of the imagined future, driven—this time by great financial rewards and global competition—despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining.
  3. Jan 2024
    1. How soon could such an intelligent robot be built? The coming advances in computing power seem to make it possible by 2030.

      In 2000, Bill Joy predicted that advances in computing would allow an intelligent robot to be built by 2030.

    2. in his history of such ideas, Darwin Among the Machines, George Dyson warns: “In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.”
    3. Uncontrolled self-replication in these newer technologies runs a much greater risk: a risk of substantial damage in the physical world.

      As a case in point, the self-replication of misinformation on social media networks has become a substantial physical risk in the early 21st century, causing not only swings in elections, but also riots, takeovers, swings in the stock market (the GameStop short squeeze of January 2021), and mob killings. It is incredibly difficult to create risk assessments for these sorts of future harms.

      In biology, we see major damage to a wide variety of species as the result of uncontrolled self-replication. We call it cancer.

      We also see programmed processes in biological settings including apoptosis and necrosis as means of avoiding major harms. What might these look like with respect to artificial intelligence?

    4. Moravec’s view is that the robots will eventually succeed us—that humans clearly face extinction.

      Joy contends that one of Hans Moravec's views in his book Robot: Mere Machine to Transcendent Mind is that robots will push the human species into extinction in much the same way that early North American placental species eliminated the South American marsupials.

    5. Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.

      Just as mosquitoes can "acquire" (evolve) DDT resistance or bacteria might evolve antibiotic resistance, might not humans evolve AI resistance? How fast might we do this? On what timeline? Will the pressure build up slowly over time, or will the onset be so quick that extinction is the only outcome?

    1. Use Glaze, a system designed to protect human artists by disrupting style mimicry, to protect what you create from being stolen under the guise of 'training AI'; the term should really be 'thievery'.

  4. Dec 2023
    1. Matt Gross (He/Him) • Vice President, Digital Initiatives at Archetype Media • 4d • So, here's an interesting project I launched two weeks ago: The HistoryNet Podcast, a mostly automated transformation of HistoryNet's archive of 25,000+ stories into an AI-driven daily podcast, powered by Instaread and Zapier. The voices are pretty good! The stories are better than pretty good! The implications are... maybe terrifying? Curious to hear what you think. Listen at https://lnkd.in/emUTduyC or, as they always say, "wherever you get your podcasts."

      https://www.linkedin.com/feed/update/urn:li:activity:7142905086325780480/

      One can now relatively easily use various tools in combination with artificial intelligence-based voices and reading to convert large corpuses of text into audiobooks, podcasts or other spoken media.
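
      A rough sketch of what such a pipeline can look like, using the offline pyttsx3 text-to-speech library (one of many TTS options; the story text and file names here are invented for illustration):

      ```python
      # Convert a small archive of text stories into audio files.
      # pyttsx3 is an offline text-to-speech engine; any TTS service would do.
      import pyttsx3

      stories = {  # hypothetical slug -> story text
          "story-001": "In 1927, the expedition set out across the ice...",
      }

      engine = pyttsx3.init()
      engine.setProperty("rate", 165)  # words per minute; a comfortable listening pace

      for slug, text in stories.items():
          engine.save_to_file(text, f"{slug}.wav")  # queue each story for rendering
      engine.runAndWait()  # render all queued utterances to disk
      ```

      Scaling this to an archive of 25,000+ stories is then mostly a matter of looping over the corpus and wiring the resulting audio into a podcast feed.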

    1. there's this broader issue of being able to get inside other people's heads as we're driving down the road all the time we're looking at other 00:48:05 people and because we have very advanced theories of mind
      • for: comparison - AI - HI - example - driving, comparison - artificial vs. human intelligence - example - driving
    2. in my view the biggest the most dangerous phenomenon on the human on our planet is uh human stupidity it's not artificial intelligence
      • for: meme - human stupidity is more dangerous than artificial intelligence

      • meme: human stupidity is more dangerous than artificial intelligence

      • author: Nikola Danaylov
      • date: 2021
  5. Nov 2023
    1. I use expiration dates and refrigerators to make a point about #AI and over-reliance, and @dajb uses ducks. #nailingit @weareopencoop

      —epilepticrabbit @epilepticrabbit@social.coop on Nov 09, 2023, 11:51 at https://mastodon.social/@epilepticrabbit@social.coop/111382329524902140

    1. As an ex-Viv (w/ Siri team) eng, let me help ease everyone's future trauma as well with the Fundamentals of Assisted Intelligence.<br><br>Make no mistake, OpenAI is building a new kind of computer, beyond just an LLM for a middleware / frontend. Key parts they'll need to pull it off:… https://t.co/uIbMChqRF9

      — Rob Phillips 🤖🦾 (@iwasrobbed) October 29, 2023
  6. Oct 2023
    1. Wang et al. "Scientific discovery in the age of artificial intelligence", Nature, 2023.

      A paper about the current state of using AI/ML for scientific discovery, connected with the AI4Science workshops at major conferences.

      (NOTE: since Springer/Nature don't allow public PDFs to be linked without a paywall, we can't use Hypothesis directly on the PDF of the paper; this link is to the website version, which is what we'll use to guide discussion during the reading group.)

    1. Envisioning the next wave of emergent AI

      Are we stretching too far by saying that AI is currently emergent? Isn't this like saying that the card indexes of the early 20th century were computers? In reality they were data storage, and the "computing" took place when humans did the actual data processing/thinking to come up with new results.

      Emergence would actually seem to be the point at which the AI takes its own output and continues processing (successfully) on it.

  7. Sep 2023
    1. R.U.R.: Rossum’s Universal Robots, drama in three acts by Karel Čapek, published in 1920 and performed in 1921. This cautionary play, for which Čapek invented the word robot (derived from the Czech word for forced labour), involves a scientist named Rossum who discovers the secret of creating humanlike machines. He establishes a factory to produce and distribute these mechanisms worldwide. Another scientist decides to make the robots more human, which he does by gradually adding such traits as the capacity to feel pain. Years later, the robots, who were created to serve humans, have come to dominate them completely.
    1. What do you do then? You can take the book to someone else who, you think, can read better than you, and have him explain the parts that trouble you. ("He" may be a living person or another book—a commentary or textbook.)

      This may be an interesting use case for artificial intelligence tools like ChatGPT, which can provide the reader of complex material with simplified synopses that allow better penetration of it (potentially by removing jargon, argot, etc.)

    2. Active Reading

      He then pushes a button and "plays back" the opinion whenever it seems appropriate to do so. He has performed acceptably without having had to think.

      This seems to be a reasonable argument to make for those who ask, why read? why take notes? especially when we can use search and artificial intelligence to do the work for us. Can we really?

  8. Aug 2023
  9. Jul 2023
    1. Epstein, Ziv, Hertzmann, Aaron, Herman, Laura, Mahari, Robert, Frank, Morgan R., Groh, Matthew, Schroeder, Hope et al. "Art and the science of generative AI: A deeper dive." ArXiv, (2023). Accessed July 21, 2023. https://doi.org/10.1126/science.adh4451.

      Abstract

      A new class of tools, colloquially called generative AI, can produce high-quality artistic media for visual arts, concept art, music, fiction, literature, video, and animation. The generative capabilities of these tools are likely to fundamentally alter the creative processes by which creators formulate ideas and put them into production. As creativity is reimagined, so too may be many sectors of society. Understanding the impact of generative AI - and making policy decisions around it - requires new interdisciplinary scientific inquiry into culture, economics, law, algorithms, and the interaction of technology and creativity. We argue that generative AI is not the harbinger of art's demise, but rather is a new medium with its own distinct affordances. In this vein, we consider the impacts of this new medium on creators across four themes: aesthetics and culture, legal questions of ownership and credit, the future of creative work, and impacts on the contemporary media ecosystem. Across these themes, we highlight key research questions and directions to inform policy and beneficial uses of the technology.

    1. Inserting a maincards with lack of memory

      Luhmann's system of inserting a maincard is fundamentally based on a person's ability to remember there are other maincards already inserted that would be related to the card you want to insert. What if you have very poor memory, like many people do; what is your process of inserting maincards? In my Antinet I handled it in an enhanced method from what I did in my 27 yrs of research notebooks, which is very different than Luhmann's method.

      reply to u/drogers8 at https://www.reddit.com/r/antinet/comments/14ot4na/inserting_a_maincards_with_lack_of_memory/

      I would submit that your first sentence is wildly false.

      What topic(s) cover your newly made cards? Look those up in your index and find where those potentially related cards are (whether you remember them or not). Go to that top level card listed in your index and see what's there or in the section of cards that come after it. Find the best card in that branch and file your new card(s) as appropriate. If necessary, cross-index them with sub-topics in your index to make them more findable in the future. If you don't find one or more of those topics in your index, then create a new branch and start an index entry for one or more of those terms. (You'll find yourself making lots of index entries to start, but it will eventually slow down—though it shouldn't stop—as your collection grows.)

      Ideally, with regular use, you'll likely remember more and more, especially for active areas you're really interested in. However, take comfort that the system is designed to let you forget everything! This forgetting will help create future surprise as well as serendipity that will be beneficial for potentially generating new ideas as you use (and review) your notes.

      And if you don't believe me, consider that Alberto Cevolini edited an entire book broadly about these techniques—including a chapter on Luhmann—which he aptly named Forgetting Machines!

  10. Jun 2023
    1. Reflection enters the picture when we want to allow agents to reflect upon themselves and their own thoughts, beliefs, and plans. Agents that have this ability we call introspective agents.
  11. learn-us-east-1-prod-fleet01-xythos.content.blackboardcdn.com
    1. The problem with that presumption is that people are all too willing to lower standards in order to make the purported newcomer appear smart. Just as people are willing to bend over backwards and make themselves stupid in order to make an AI interface appear smart

      AI has recently become such a big thing in our lives. For a while I was seeing ChatGPT and Snapchat AI all over the media. I feel like people ask these sites questions they already know the answers to because they don't want to take a few minutes to think about the answer. I found a website stating how many people use AI, and not surprisingly, it shows that 27% of Americans say they use it several times a day. I can't imagine how many people use it per year.

    1. there is a scenario 00:18:21 uh possibly a likely scenario where we live in a Utopia where we really never have to worry again where we stop messing up our our planet because intelligence is not a bad commodity more 00:18:35 intelligence is good the problems in our planet today are not because of our intelligence they are because of our limited intelligence
      • limited (machine) intelligence

        • cannot help but exist
        • if the original (human) authors of the AI code are themselves limited in their intelligence
      • comment

        • this limitation is essentially what will result in AI progress traps
        • Indeed, progress and its shadow artefacts, progress traps, form the proper framework to analyze the existential dilemma posed by AI
    1. Project Tailwind by Steven Johnson

    2. I’ve also found that Tailwind works extremely well as an extension of my memory. I’ve uploaded my “spark file” of personal notes that date back almost twenty years, and using that as a source, I can ask remarkably open-ended questions—“did I ever write anything about 19th-century urban planning” or “what was the deal with that story about Houdini and Conan Doyle?”—and Tailwind will give me a cogent summary weaving together information from multiple notes. And it’s all accompanied by citations if I want to refer to the original direct quotes for whatever reason.

      This sounds like the sort of personalized AI tool I've been wishing for since the early ChatGPT models, if not from even earlier dreams predating them....
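
      A minimal sketch of the retrieval half of such a tool, using TF-IDF similarity from scikit-learn (the notes and query are invented; a tool like Tailwind presumably uses learned embeddings plus an LLM for summarization, but the shape of the problem is the same):

      ```python
      # Rank personal notes by relevance to an open-ended question.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      notes = {  # stand-ins for entries in a twenty-year "spark file"
          "2006-03-12": "Olmsted and 19th-century urban planning: parks as social infrastructure...",
          "2011-07-04": "Houdini and Conan Doyle fell out over spiritualism and seances...",
          "2019-01-20": "Commonplace books as external memory for writers...",
      }

      vectorizer = TfidfVectorizer()
      matrix = vectorizer.fit_transform(notes.values())

      def search(query, top_k=2):
          scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
          ranked = sorted(zip(notes, scores), key=lambda pair: -pair[1])
          return ranked[:top_k]  # (note id, relevance) pairs double as citations

      print(search("what was the deal with that story about Houdini and Conan Doyle?"))
      ```

      The retrieved notes, with their identifiers as citations, are what a summarizing model would then weave into a cogent answer.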

  12. May 2023
    1. Deep Learning (DL): A Technique for Implementing Machine Learning

      • Subfield of ML that uses specialized techniques involving multi-layer (2+) artificial neural networks
      • Layering allows cascaded learning and abstraction levels (e.g. line -> shape -> object -> scene)
      • Computationally intensive; enabled by clouds, GPUs, and specialized HW such as FPGAs, TPUs, etc.

      [29] AI - Deep Learning
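
      A minimal sketch of that layering idea in plain numpy (layer widths and the ReLU activation are illustrative choices, not from the slide): each layer transforms the previous layer's output, which is what produces the cascade of increasingly abstract features.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def relu(x):
          return np.maximum(0.0, x)  # a common nonlinearity between layers

      # "Deep" = 2+ layers composed; each maps its input to a more abstract
      # representation (line -> shape -> object -> scene).
      sizes = [64, 32, 16, 4]  # hypothetical widths: input -> hidden -> hidden -> output
      weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]

      def forward(x):
          for W in weights[:-1]:
              x = relu(x @ W)      # each hidden layer builds on the previous one
          return x @ weights[-1]   # final layer maps to output scores

      x = rng.normal(size=(1, 64))  # one dummy input sample
      print(forward(x).shape)       # -> (1, 4)
      ```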

    1. The object of the present volume is to point out the effects and the advantages which arise from the use of tools and machines ;—to endeavour to classify their modes of action ;—and to trace both the causes and the consequences of applying machinery to supersede the skill and power of the human arm.

      [28] AI - precedents...

    1. Epidemiologist Michael Abramson, who led the research, found that the participants who texted more often tended to work faster but score lower on the tests.

      [21] AI - Skills Erosion

    1. An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms. It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

      [21] AI Nuances

    1. According to him, there are several goals connected to AI alignment that need to be addressed:

      [20] AI - Alignment Goals

    1. The following table lists the results that we visualized in the graphic.

      [18] AI - Increased sophistication

    1. Tagging and linking with AI (Napkin.one) by Nicole van der Hoeven

      https://www.youtube.com/watch?v=p2E3gRXiLYY

      Nicole underlines the value of a good user interface for traversing one's notes. She'd had issues with tagging things in Obsidian using their #tag functionality, but never with their [[WikiLink]] functionality. Something about the autotagging done by Napkin's artificial intelligence makes the process easier for her. Some of this may be down to how their user interface makes it easier/more intuitive as well as how it changes and presents related notes in succession.

      Most interesting however is the visual presentation of notes and tags in conjunction with an outliner for taking one's notes and composing a draft using drag and drop.

      Napkin as a visual layer over tooling like Obsidian, Logseq, et al. would be a much more compelling choice for me in terms of taking my pre-existing data and doing something useful with it rather than just creating yet another digital copy of all my things (and potentially needing sync to keep them up to date).

      What is Napkin doing with all of their users' data?

  13. Apr 2023
    1. Abstract

      Recent innovations in artificial intelligence (AI) are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. So-called “generative AI” computer programs—such as Open AI’s DALL-E 2 and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”). These generative AI programs are “trained” to generate such works partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other artworks. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have begun to confront regarding whether the outputs of generative AI programs are entitled to copyright protection as well as how training and using these programs might infringe copyrights in other works.

    1. It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.

      This isn't true. The Stochastic Parrots paper outlines other avenues for reining in the harms of language models like the GPTs.

    1. The result of working with this technique for a long time is a kind of second memory, an alter ego with which you can always communicate. It has, similar to our own memory, no pre-planned comprehensive order, no hierarchy, and surely no linear structure like a book. And by that very fact, it is alive independently of its author. The entire note collection can only be described as a mess, but at least it is a mess with a non-arbitrary internal structure.

      Luhmann attributes (an independent) life to his zettelkasten. It is effectuated by internal branching, opportunities for links or connections, and a register as well as lack of pre-planned comprehensive order, lack of hierarchy, and lack of linear structure.

      Which of these is necessary for other types of "life"? Can any be removed? Compare with other systems.

  14. Mar 2023
    1. Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.

      Would the argument here for stochastic parrots also potentially apply to or could it be abstracted to Markov monkeys?

    1. A.I. Is Mastering Language. Should We Trust What It Says?<br /> by Steven Johnson, art by Nikita Iziev

      Johnson does a good job of looking at the basic state of artificial intelligence and the history of large language models and specifically ChatGPT and asks some interesting ethical questions, but in a way which may not prompt any actual change.


      When we write about technology and the benefits and wealth it might bring, do we do too much ethics washing, papering over the problems and allowing the bad things to come too easily to pass?

    2. We know from modern neuroscience that prediction is a core property of human intelligence. Perhaps the game of predict-the-next-word is what children unconsciously play when they are acquiring language themselves: listening to what initially seems to be a random stream of phonemes from the adults around them, gradually detecting patterns in that stream and testing those hypotheses by anticipating words as they are spoken. Perhaps that game is the initial scaffolding beneath all the complex forms of thinking that language makes possible.

      Is language acquisition a very complex method of pattern recognition?
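
      A toy version of that game (the training text is a made-up stand-in): even a bigram count model, vastly simpler than a neural language model, picks up local patterns of a language purely by playing predict-the-next-word.

      ```python
      import random
      from collections import Counter, defaultdict

      text = "the cat sat on the mat and the cat saw the dog".split()  # stand-in corpus

      # Count which word follows which: the simplest next-word predictor.
      following = defaultdict(Counter)
      for prev, nxt in zip(text, text[1:]):
          following[prev][nxt] += 1

      def predict(prev):
          words, weights = zip(*following[prev].items())
          return random.choices(words, weights=weights)[0]

      print(predict("the"))  # most often "cat": a learned local pattern
      ```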

    3. How do we make them ‘‘benefit humanity as a whole’’ when humanity itself can’t agree on basic facts, much less core ethics and civic values?
    4. Another way to widen the pool of stakeholders is for government regulators to get into the game, indirectly representing the will of a larger electorate through their interventions.

      This is certainly "a way", but history has shown, particularly in the United States, that government regulation is unlikely to get involved until it's far too late, if at all. Typically regulators step in not just after maturity, but only when massive failure may cause issues for the wealthy, and then the "regulation" is to bail them out.

      Suggesting this here is so pie-in-the-sky that it only creates a false hope (hope washing?) for the powerless. Is this sort of hope washing a recurring part of

    5. OpenAI has not detailed in any concrete way who exactly will get to define what it means for A.I. to ‘‘benefit humanity as a whole.’’

      Who gets to make the decisions?

    6. Whose values do we put through the A.G.I.? Who decides what it will do and not do? These will be some of the highest-stakes decisions that we’ve had to make collectively as a society.’’

      A similar set of questions might be asked of our political system. At present, the oligopolistic nature of our electoral system is heavily biasing our direction as a country.

      We're heavily underrepresented on a huge number of axes.

      How would we change our voting and representation systems to better represent us?

    7. Should we build an A.G.I. that loves the Proud Boys, the spam artists, the Russian troll farms, the QAnon fabulists?

      What features would we design society towards? Stability? Freedom? Wealth? Tolerance?

      How might long term evolution work for societies that maximized for tolerance given Popper's paradox of tolerance?

    8. Right before we left our lunch, Sam Altman quoted a saying of Ilya Sutskever’s: ‘‘One thing that Ilya says — which I always think sounds a little bit tech-utopian, but it sticks in your memory — is, ‘It’s very important that we build an A.G.I. that loves humanity.’ ’’
    1. the apocalypse they refer to is not some kind of sci-fi takeover like Skynet, or whatever those researchers thought had a 10 percent chance of happening. They’re not predicting sentient evil robots. Instead, they warn of a world where the use of AI in a zillion different ways will cause chaos by allowing automated misinformation, throwing people out of work, and giving vast power to virtually anyone who wants to abuse it. The sin of the companies developing AI pell-mell is that they’re recklessly disseminating this mighty force.

      Not Skynet, but social disruption

    1. ChatGPT: This is a free research preview. 🔬 Our goal is to get external feedback in order to improve our systems and make them safer. 🚨 While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.
  15. Feb 2023
    1. Sam Matla talks about the collector's fallacy in a negative light, and for many/most, he might be right. But for some, collecting examples and evidence of particular things is crucially important. The key is to have some idea of what you're collecting and why.

      Historians collecting small facts over time may seem this way, but out of their collection can emerge patterns which otherwise would never have been seen.

      cf: Keith Thomas article

      Are there concrete examples of this to show the opposite?

      Relationship to the idea of AI coming up with black box solutions via their own method of diffuse thinking

    1. Certainly, computerization might seem to resolve some of the limitations of systems like Deutsch’s, allowing for full-text search or multiple tagging of individual data points, but an exchange of cards for bits only changes the method of recording, leaving behind the reality that one must still determine what to catalogue, how to relate it to the whole, and the overarching system.

      Despite the affordances of recording, searching, and tagging offered by computerized note-taking systems, the problem still remains of what to search for or collect and how to relate the smaller parts to the whole.


      customer relationship management vs. personal knowledge management (or, perhaps more important, knowledge relationship management: the relationship of individual facts to the overall whole), suggested by autocomplete on "knowl..."

    2. One might then say that Deutsch’s index developed at the height of the pursuit of historical objectivity and constituted a tool of historical research not particularly innovative or limited to him alone, given that the use of notecards was encouraged by so many figures, and it crystallized a positivistic methodology on its way out.

      Can zettelkasten be used for methodologies other than positivistic ones?

    1. In his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the computer scientist Joseph Weizenbaum observed some interesting tendencies in his fellow humans. In one now-famous anecdote, he described his secretary’s early interactions with his program ELIZA, a proto-chatbot he created in 1966.

      Description of Joseph Weizenbaum's ELIZA program

      When rule-based artificial intelligence was the state-of-the-art.

    1. https://www.cyberneticforests.com/ai-images

      Critical Topics: AI Images is an undergraduate class delivered for Bradley University in Spring 2023. It is meant to provide an overview of the context of AI art making tools and connects media studies, new media art, and data ethics with current events and debates in AI and generative art. Students will learn to think critically about these tools by using them: understand what they are by making work that reflects the context and histories of the tools.

    1. Sloan, Robin. “Author’s Note.” Experimental fiction. Wordcraft Writers Workshop, November 2022. https://wordcraft-writers-workshop.appspot.com/stories/robin-sloan.

      brilliant!

    2. "I have affirmed the premise that the enemy can be so simple as a bundle of hate," said he. "What else? I have extinguished the light of a story utterly.

      How fitting that the amanuensis in a short story written with the help of artificial intelligence has done the opposite of what the author intended!

    1. Wordcraft Writers Workshop by Andy Coenen - PAIR, Daphne Ippolito - Brain Research, Ann Yuan - PAIR, Sehmon Burnam - Magenta

      cross reference: ChatGPT

    2. LaMDA was not designed as a writing tool. LaMDA was explicitly trained to respond safely and sensibly to whomever it’s engaging with.
    3. LaMDA's safety features could also be limiting: Michelle Taransky found that "the software seemed very reluctant to generate people doing mean things". Models that generate toxic content are highly undesirable, but a literary world where no character is ever mean is unlikely to be interesting.
    4. A recurring theme in the authors’ feedback was that Wordcraft could not stick to a single narrative arc or writing direction.

      When does using an artificial intelligence-based writing tool make the writer an editor of the computer's output rather than the writer themself?

    5. If I were going to use an AI, I'd want to plug in and give massive priority to my commonplace book and personal notes, followed secondarily by the materials I've read, watched, and listened to.

    6. Several participants noted the occasionally surreal quality of Wordcraft's suggestions.

      Wordcraft's hallucinations can create interesting and creatively surreal suggestions.

      How might one dial up or down the ability to hallucinate or create surrealism within an artificial intelligence used for thinking, writing, etc.?
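
      One existing knob for exactly this in generative language models is the sampling temperature (the vocabulary and scores below are invented for illustration): low temperature makes the model take its safest guess, while high temperature flattens the probability distribution and lets stranger, more surreal continuations through.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      vocab = ["door", "window", "moon", "crocodile"]  # made-up next-word candidates
      logits = np.array([3.0, 2.5, 0.5, -1.0])         # made-up model scores

      def sample(temperature):
          # Softmax with temperature: T < 1 sharpens the distribution, T > 1 flattens it.
          scaled = logits / temperature
          probs = np.exp(scaled - scaled.max())
          probs /= probs.sum()
          return vocab[rng.choice(len(vocab), p=probs)]

      print([sample(0.2) for _ in range(5)])  # conservative: almost always "door"
      print([sample(2.0) for _ in range(5)])  # surreal: "crocodile" starts showing up
      ```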

    7. Writers struggled with the fickle nature of the system. They often spent a great deal of time wading through Wordcraft's suggestions before finding anything interesting enough to be useful. Even when writers struck gold, it proved challenging to consistently reproduce the behavior. Not surprisingly, writers who had spent time studying the technical underpinnings of large language models or who had worked with them before were better able to get the tool to do what they wanted.

      Because one may need to spend an inordinate amount of time filtering through potentially bad suggestions of artificial intelligence, the time and energy spent keeping a commonplace book or zettelkasten may pay off magnificently in the long run.