May 2023
maggieappleton.com
Recently, people have been developing more sophisticated methods of prompting language models, such as "prompt chaining" or composition. Ought has been researching this for a few years, and recently released libraries like LangChain make it much easier to do. This approach addresses many of the weaknesses of language models: lack of knowledge of recent events, inaccuracy, difficulty with mathematics, lack of long-term memory, and inability to interact with the rest of our digital systems.

Prompt chaining is a way of setting up a language model to mimic a reasoning loop in combination with external tools. You give it a goal to achieve, and the model then loops through a set of steps: it observes and reflects on what it knows so far, then decides on a course of action. It can pick from a set of tools to help solve the problem, such as searching the web, writing and running code, querying a database, using a calculator, hitting an API, or connecting to Zapier or IFTTT. After each action, the model reflects on what it has learned and picks another action, continuing the loop until it arrives at the final output. This gives us much more sophisticated answers than a single language model call, making them more accurate and able to handle more complex tasks. It mimics a very basic version of how humans reason, similar to the OODA loop (Observe, Orient, Decide, Act).
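A minimal sketch of that observe–reflect–decide–act loop, in Python. This is not Ought's or LangChain's actual API: `call_llm()` is a hypothetical placeholder for a model call, and the two tools are illustrative stand-ins.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a language model call (e.g. an API request)."""
    raise NotImplementedError

# Illustrative tools the model can choose from.
TOOLS = {
    "search_web": lambda q: f"(top search results for {q!r})",
    "calculator": lambda expr: str(eval(expr)),  # illustrative only, not safe for real input
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    scratchpad = []  # running record of actions taken and what was observed
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"So far: {json.dumps(scratchpad)}\n"
            'Reply with JSON: {"tool": <tool name or "finish">, "input": <string>}'
        )
        decision = json.loads(call_llm(prompt))      # reflect on progress, decide on an action
        if decision["tool"] == "finish":
            return decision["input"]                 # model judges the goal is met
        observation = TOOLS[decision["tool"]](decision["input"])  # act with the chosen tool
        scratchpad.append({"action": decision, "observation": observation})
    return "No answer within the step budget."
```

A real implementation would add output parsing, error handling, and tool descriptions in the prompt, but the loop structure is the core idea.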
Prompt chaining is when you iterate through multiple steps from an input to a final result, where the output of each intermediate step becomes the input for the next. This is what AutoGPT does. Appleton's employer Ought is working in this area as well. https://www.zylstra.org/blog/2023/05/playing-with-autogpt/
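A rough illustration of that definition as a linear chain, reusing the same hypothetical `call_llm()` placeholder from the sketch above (not AutoGPT's or any library's actual interface):

```python
def chain(steps: list[str], initial_input: str) -> str:
    """Run each instruction in turn, feeding the previous output in as the next input."""
    result = initial_input
    for instruction in steps:
        result = call_llm(f"{instruction}\n\nInput:\n{result}")
    return result

# Example use: summarise a page, then extract action items from that summary.
# final = chain(
#     ["Summarise the following text.",
#      "List concrete action items based on this summary."],
#     page_text,
# )
```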