45 Matching Annotations
  1. Aug 2021
  2. universalprior.substack.com
    1. recommend.

      I am not sure where to put this, but another big question is the cost of training and running this thing. If you don't want to answer now, you can say explicitly that it will be in a future post... because a lot of people will ask, for sure.

    2. I’ll publish another post with all the boring technical details and code snippets in case someone wants to try the same.

      Maybe you should say this explicitly in the text? It's as if you have the results section but not the methods section; you should say the methods will be elsewhere.

    3. Probably the closest you can currently get to talk to an alien mind.

      This clashes a bit with the section title, "#IAN does not have a brain". In fact that title is a bit disjoint from the section content, which seems to be more about privacy concerns, whereas the theme of AGI and its associated risks is covered above.

      ... oh damn, I just noticed #IAN did the titles... maybe cherry-pick again? 😇

    4. , the last bits are the deepest,

      😢 I don't want to read a full blog post just to understand what you mean by this cryptic sentence. Can you rephrase it?

    5. most striking experience is

      If you charge the reader with the "most striking experience" you must deliver! From the wording it seems that what was striking was re-reading and examining your own text. But the point here is that you examined the AI outputs and noticed invariants and recurring themes in them, which surprised you because you weren't fully aware of them yourself. Like a word-cloud on steroids.

    6. Also this

      Maybe this deserves a footnote, with some explanation of what it is. I felt curious, and it took me some time to realize it is a short story; then I thought I needed to read it fully to understand the point you are making... it totally breaks the flow.

      Instead you can simply say in a footnote "there is also this very cool sci-fi short story about it... etc"

    7. would cross this line for me.

      I think you are too indirect here about the main point you are making: that this is for private use, or fully "supervised" use, not for someone to play with unguarded.

      Basically a similar level of privacy you have for the source material.

      I think there is work out there showing it is possible to extract learned training content from language models.

    8. idea like this.

      Well, there is also the fact that this is not a simplified step-by-step tutorial on how to do it. And one can find plenty of tutorials by googling, as you did.

    9. resigned despair

      OK, even given the footnote, I really do not understand why there is despair or why they hate it. It's confusing without context... are they just AI-luddites?

    10. I imagine a routine as follows:

      This part feels a bit too technical :-( since you don't really explain how you do the training in the first place. If you want to keep this part, maybe explain in a few sentences how the method works in general.

    11. It would likely also come with higher hardware requirements, to the point where finetuning is not feasible even with TRC access. On the other hand, compute gets cheaper every year, so perhaps it will become feasible in a few years.

      You can refine the thought a bit: on one hand the compute requirements grow, on the other the available compute capacity increases over time. If the two counterbalance, the cost of improved models will stay about the same, etc.
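      The counterbalancing thought can be made concrete with a toy calculation (the growth rates below are made-up illustrative numbers, not forecasts):

      ```python
      def cost_after(years, compute_growth=2.0, price_perf_growth=2.0, base_cost=1.0):
          """Cost of training the 'current best' model after `years`,
          if required compute and hardware price/performance both compound."""
          required_compute = compute_growth ** years
          compute_per_dollar = price_perf_growth ** years
          return base_cost * required_compute / compute_per_dollar

      # If the two rates match, the cost stays flat:
      print(cost_after(10))                      # 1.0
      # If requirements outpace hardware, the cost explodes:
      print(cost_after(10, compute_growth=3.0))  # ~57.7
      ```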

    12. For the next time where I have a continuous stretch of free time to work on improving #IAN, I have a few things I'd love to try.

      This sentence is a bit convoluted, but I feel too lazy to suggest improvements

    13. embrace the weirdness and the otherworldly quirkiness.

      Maybe this should go at the beginning of the question? At first it is not entirely clear. Now I get it: point 1, cherry-pick results; point 2, use better prompts; point 3, do nothing and embrace the quirkiness.

      This is not really "changing the ground truth", right?

    14. ney for the next 10 years:

      I am curious here: the prompt is in italics, and the model just created the list we see below? I'd expect it to write paragraphs more often than not. And the bold fonts were added by you, right? Any cherry-picking?

    15. I believe the second option has been under-explored.

      Reminds me of the comedy sketch about a person using the restroom reserved for the disabled (or the similarly reserved parking spot). A lot of people stare at him or knock on the door, so he leaves the restroom/the car dragging his leg, pretending to be disabled.

    16. to reach that level of quality

      This confuses me a bit, because it reads as if there were a single "objective function" we can improve on. But how are we sure of that? Say 10% of the outputs are to my liking. I can train the network more and still get 10% to my liking, but with better grammar and more consistent reasoning. It really depends on what I am improving upon, and whether that is possible given the model's limitations.

    17. er there are additional strategies that have been proposed

      This can be straightened into "additional strategies have been proposed to improve model output"; "further" is redundant. (I remember Adam Kohn killing a lot of these things in the drafts of my publications.)

    18. ~50Mb in total

      Well, what we really want to know as readers is the word count of your corpus. The size in MB might as well depend on the character encoding.
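      To illustrate the encoding point: the same text has a stable word count, but a byte size that varies with the encoding (toy sentence, not the actual corpus):

      ```python
      # Byte size depends on the encoding; word count does not.
      text = "Größe matters less than word count: naïve byte counts mislead."

      n_words = len(text.split())
      utf8_bytes = len(text.encode("utf-8"))
      utf16_bytes = len(text.encode("utf-16-le"))

      print(n_words)      # 10
      print(utf8_bytes)   # 65 (three non-ASCII chars take 2 bytes each)
      print(utf16_bytes)  # 124 (2 bytes per character)
      ```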

    19. not faster than just reading the text

      Here do you mean "reading the answer to your questions in Feynman's textbooks"? A big difference is that the AI isn't guaranteed to get it right, and it doesn't even offer a clear picture of what is in the textbooks and what isn't. Imagine I ask about quantum computing: the Feynman AI would give an answer that isn't really part of his work.

    20. to

      It's clearer without the verb at the end. Could be something like: "New Emacs packages promise to apply well-engineered..."

      Also, what does it mean to "apply a contextual prompt to arbitrary text"? I thought that, given arbitrary text, you could use it as a prompt for an AI of your choice, with a standardized and simplified interface. Something like that.
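      A minimal sketch of my reading of "apply a contextual prompt to arbitrary text" (the function name and template are my own invention, not from the packages in question):

      ```python
      def apply_contextual_prompt(text: str, instruction: str) -> str:
          """Wrap arbitrary text in an instruction template, ready to send
          to a language model of your choice."""
          return f"{instruction}\n\n---\n{text}\n---\n\nAnswer:"

      prompt = apply_contextual_prompt(
          "Emacs is an extensible, customizable text editor.",
          "Summarize the following passage in one sentence.",
      )
      print(prompt.startswith("Summarize"))  # True
      ```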

  3. Dec 2018
  4. Jul 2018
  5. Sep 2017