- May 2024
-
localhost:8080
-
WORKS OF PLATO
TEST
-
- Oct 2023
-
www.onceuponachef.com
-
Came out pretty good; missing a certain 'je ne sais quoi'. Quite fast though! Started cooking ~5:26, finished 6:45, so ~1h20 total -- wow, and the recipe listed 1h15!!! I did it in time for once!! 😁
Would've liked a bit more browning on the veg though. Roasting is really the only way to get that kind of flavour.
-
- Aug 2023
-
simonwillison.net
-
This is the OpenAI API call for embeddings—you send it text, it returns those floating point numbers. It’s incredibly cheap. Embedding everything on my site—400,000 tokens, which is about 300,000 words or the length of two novels—cost me 4 cents.
Oh, I didn't realize OpenAI had an API for this! I wonder how you can take the output and toss it into a vector DB for use with LlamaIndex or something.
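For reference, here's a minimal sketch of what that call looks like with the `openai` Python package -- the model name is illustrative, and a vector DB would just store the returned list of floats alongside the original text:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> list[float]:
    # One API call: text in, a list of floating point numbers out.
    response = client.embeddings.create(
        model="text-embedding-3-small",  # illustrative; any embedding model works
        input=text,
    )
    return response.data[0].embedding
```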
-
The trick instead is to take the user’s question, search for relevant documents using a regular search engine or a fancy vector search engine, pull back as much relevant information as will fit into that 4,000 or 8,000 token limit, add the user’s question at the bottom and ask the language model to reply.
This is the core of how LlamaIndex works. It's a very powerful tool!
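A rough sketch of that retrieve-then-ask pattern, reusing the hypothetical `embed()` helper from the snippet above. The model name is illustrative; a real system would precompute document embeddings and trim the context to a token budget rather than a fixed top-k:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def cosine(a: list[float], b: list[float]) -> float:
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(question: str, docs: list[str], k: int = 3) -> str:
    # Rank documents by similarity to the question and keep the top k.
    q_vec = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    context = "\n\n".join(ranked[:k])
    prompt = (
        f"Answer the question using only this context:\n{context}\n\n"
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```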
-
A simple Python implementation of the ReAct pattern for LLMs.
-
It’s called the ReAct paper, and it describes another one of these prompt engineering tricks. You tell a language model that it has the ability to run tools, like a Google search, or to use a calculator. If it wants to run them, it says what it needs and then stops. Then your code runs that tool and pastes the result back into the model for it to continue processing.
I use approximately this pattern a lot; I didn't realize it had a name and a paper! Need to check that out.
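A toy version of that loop (not the linked implementation -- the tool, the prompt format, and the model name here are all assumptions for illustration):

```python
import re
from openai import OpenAI

client = OpenAI()

def calculate(expression: str) -> str:
    # Hypothetical calculator tool; eval() is fine for a toy, not for production.
    return str(eval(expression))

TOOLS = {"calculate": calculate}

SYSTEM = (
    "Answer questions in a loop of Thought, Action, Observation. "
    "To use a tool, emit a line like 'Action: calculate: 2 + 2' and stop. "
    "When you know the answer, reply with 'Answer: ...'."
)

def react(question: str, max_turns: int = 5) -> str:
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": question},
    ]
    for _ in range(max_turns):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        match = re.search(r"Action: (\w+): (.*)", reply)
        if not match:
            return reply  # no tool requested; treat the reply as the answer
        tool, arg = match.groups()
        # Run the requested tool and paste the result back in for the model.
        messages.append({"role": "user", "content": f"Observation: {TOOLS[tool](arg)}"})
    return reply
```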
-
What this adds up to is that these language models make me more ambitious with the projects that I’m willing to take on. It used to be that I’d think of a project and think, “You know, that’s going to take me two or three hours of figuring out, and I haven’t got two or three hours, and so I just won’t do that.” But now I can think, “Okay, but if ChatGPT figures out some of the details for me, maybe it can do it in half an hour. And if I can do it in half an hour, I can justify it.”
100%! It's one of my favourite things about tools like Copilot!
-
There’s this idea of T-shaped people: having a bunch of general knowledge and then deep expertise in a single thing. The upgrade from that is when you’re pi-shaped (actually a real term)—you have expertise in two areas. I think language models give us all the opportunity to become comb-shaped. We can pick a whole bunch of different things and accelerate our understanding of them using these tools to the point that, while we may not be experts, we can act like experts.
Very much strong agree. These LLMs let us translate our expertise in our domain to other domains.
-
LLMs have started to make me redefine what I consider to be expertise. I’ve been using Git for 15 years, but I couldn’t tell you what most of the options in Git do. I always felt like that meant I was just a Git user, but nowhere near being a Git expert. Now I use sophisticated Git options all the time, because ChatGPT knows them and I can prompt it to tell me what to do.
Same; I love the fact that it lets me quickly learn about and use more advanced/obscure features of the tools I use.
-
We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harm. Work on synthetic human behavior is a bright line in ethical AI development. This has been ignored by essentially everyone! These chatbots are imitating humans, using “I” pronouns, even talking about their opinions.
My guess would be that the paper is talking about the risk of AGI, i.e. mimicking humans intellectually, not about the wording the LLM uses. But I haven't read the paper.
-
Code-wise, I will never commit code if I can’t both understand and explain every line of the code that I’m committing. Occasionally, it’ll spit out quite a detailed solution to a coding problem I have that clearly works because I can run the code. But I won’t commit that code until I’ve at least broken it down and made sure that I fully understand it and could explain it to somebody else.
This is super important, especially for people who aren't very comfortable with coding or who are learning.
-
I read academic papers now. I never used to because I found them so infuriating—because they would throw 15 pieces of jargon at you that you didn’t understand and you’d have to do half an hour of background reading just to be able to understand them.
Love this use case!
-
The question I always ask myself is: Could my friend who just read the Wikipedia article about this answer my question about this topic? All of these models have been trained on Wikipedia, plus Wikipedia represents a sort of baseline of a level of knowledge which is widely enough agreed upon around the world that the model has probably seen enough things that agree that it’ll be able to answer those questions.
Yes and no. I think this works for the LLMs that search the internet, e.g. Bing: they tend to perform a quick search on the subject and generally give slightly shallower responses because they primarily base their answers on the small sample they found in the search results. For those, a better metaphor is "Would my friend who spends 10 minutes performing a few Google searches and reading articles be able to answer my question?"
LLMs are trained on way more than just Wikipedia, though; for LLMs without search access a better metaphor is "Would my friend who spent a year reading everything they could find on this topic be able to answer my question about this topic?" They tend to have much deeper knowledge of the subject.
-
The obvious question then is how on earth do these things even work? Genuinely all these things are doing is predicting the next word in the sentence. That’s the whole trick. If you’ve used an iPhone keyboard, you’ve seen this. I type “I enjoy eating,” and my iPhone suggests that the next word I might want to enter is “breakfast”.
I'm not an expert on this, but I kind of wish people would slow down on this metaphor. There's some truth to it, but LLMs aren't just predicting 'words'; they're auto-completing at a significantly more abstract level, predicting really abstract concepts and then expressing those as words.
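For what it's worth, the mechanical loop the quote describes really is this simple -- here's a toy greedy decoder over a hand-made probability table, using the quote's own "I enjoy eating" example (real models score tens of thousands of tokens with a neural network instead of a lookup table, which is where the abstraction lives):

```python
# Toy next-word table: maps the last two words to a distribution over candidates.
NEXT_WORD = {
    ("I", "enjoy"): {"eating": 0.6, "coding": 0.4},
    ("enjoy", "eating"): {"breakfast": 0.5, "pizza": 0.3, "out": 0.2},
}

def complete(words: list[str], steps: int = 2) -> list[str]:
    for _ in range(steps):
        dist = NEXT_WORD.get(tuple(words[-2:]))
        if not dist:
            break
        # Greedy decoding: always pick the most likely next word.
        words.append(max(dist, key=dist.get))
    return words

print(complete(["I", "enjoy"]))  # ['I', 'enjoy', 'eating', 'breakfast']
```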
-
There’s a thing called the system prompt, where you can provide an additional prompt that tells it how it should behave. I can run the same prompt with a system prompt that says “You are a poet”—and it writes a poem!
Again, a good pragmatic trick that seems to result in better responses.
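In API terms the trick is just a separate message role -- a minimal sketch with the OpenAI chat API (model name and prompt are illustrative):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a poet."},  # the system prompt
        {"role": "user", "content": "Describe a sunset over Lake Ontario."},
    ],
)
print(response.choices[0].message.content)
```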
-
This paper showed that if you give a logic puzzle to a language model, it gets it wrong. But if you give it the same puzzle and then say, “let’s think step by step”, it’ll get it right. Because it will think out loud, and get to the right answer way more often.
These tricks really do help when working on practical applications of these tools.
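The trick is literally a suffix on the prompt -- a sketch, assuming the same OpenAI client as above:

```python
from openai import OpenAI

client = OpenAI()

puzzle = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": puzzle + "\n\nLet's think step by step."}],
)
print(response.choices[0].message.content)  # the model talks through its reasoning first
```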
-
-
ia903103.us.archive.org
-
They were plotting to rob Squire Trueman,
Not again!
-
-
-
They were plotting to rob Squire Trueman
Oh no!
-
- Dec 2022
-
torontojournal.com
-
I'm reading this beautiful story with AI. I asked ChatGPT to read the beginning of the story and give me context. And it told me about Gabriel's background. And it told me Seppo wasn't a known archangel but original to this story. I asked DALL-E to generate images for the passage describing Seppo's workshop. And they were beautiful. I couldn't include the entire paragraph because DALL-E has a low character limit, so I asked ChatGPT to create a DALL-E prompt from the paragraph. And it created further beautiful art.
All this in the background while I'm reading about a city of angels. Of individuals of infinite power and time. And because of these AI tools, I feel closer to them. Or maybe the AI is the angel -- an all-knowing angel. And like Gabriel asking Seppo to create the impossible, so can I ask the AI. Or maybe, like Seppo, I can now create the impossible with the help of these AI tools.
I'm not sure. But God and the celestial suddenly feel much closer. We can now create things at a rate like never before. Perhaps that is also why I feel closer. Our lives are now extended by these tools, because we can achieve projects which would have previously taken lifetimes.
And yet, a part of me is frightened to share ChatGPT with my friends and family. With those who create. How will they respond to seeing that same act of creation arise from an AI? Perhaps at comparable or likely greater quality? Will it be discouraging? Will they need to shift their definition of what gives their lives meaning? Will they be able to? How would Seppo respond to being told making prophecies is suddenly significantly easier and can be done by many more people?
What do I feel? I feel at times liberated by its power. Awed by its ability. At times suddenly lost. It feels like soon it will be able to do so much more than me. I will have significantly more time to do... What? Thankfully I have always been obsessed with original creation, as a personal value even if it never provides value to others. And maybe my ideas will still be original in the light of AI. So I guess I would still create? Or maybe I would make more time for the people in my life. Double down on being human. Find meaning through my relationships. Huh. Even now I'm reiterating a sentence ChatGPT generated when I asked it a few days ago what it means to be happy, where it said some people find happiness in their relationships.
-
“I like to have faith that mankind still believes,” Gabriel said. “Not the ones you want to believe,” Seppo muttered.
Chills
-