9 Matching Annotations
  1. Feb 2024
    1. [[Lee Bryant]] links to this overview by Simon Willison of what happened in #2023 in #AI. Some good pointers w.r.t. [[ChatPKM myself]]; dig those out.

  2. May 2023
    1. Making these models smaller and more specialised would also allow us to run them on local devices instead of relying on access via large corporations.

      This. Compare [[CPUs, GPUs, and Now AI Chips]], hardware with AI on them. Compare [[Everymans Allemans AI 20190807141523]]

    2. One alternate approach is to start with our own curated datasets we trust. These could be repositories of published scientific papers, our own personal notes, or public databases like Wikipedia. We can then run many small specialised model tasks over them.

      Yes, if I could run an LLM locally over my own notes of some three decades (where it doesn't feed the general model), I would do that instantly. See the sketch below.
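      A minimal sketch of that wish, assuming a local model served through Ollama's HTTP API on its default port; the notes folder, the model tag, and the naive keyword retrieval are placeholder assumptions for illustration, not anything the article prescribes:

      ```python
      """Answer a question against a folder of personal markdown notes
      using a locally served model, so nothing leaves the machine."""
      from pathlib import Path
      import requests

      NOTES_DIR = Path("~/notes").expanduser()    # placeholder: your notes folder
      OLLAMA_URL = "http://localhost:11434/api/generate"
      MODEL = "llama3"                            # placeholder: any locally pulled model tag

      def load_notes(notes_dir: Path) -> list[tuple[str, str]]:
          """Read all markdown files as (filename, text) pairs."""
          return [(p.name, p.read_text(encoding="utf-8", errors="ignore"))
                  for p in notes_dir.rglob("*.md")]

      def top_matches(question: str, notes: list[tuple[str, str]], k: int = 3):
          """Rank notes by crude keyword overlap with the question."""
          terms = {w.lower() for w in question.split() if len(w) > 3}
          scored = [(sum(t in text.lower() for t in terms), name, text)
                    for name, text in notes]
          return [(name, text) for score, name, text in
                  sorted(scored, reverse=True)[:k] if score > 0]

      def ask(question: str) -> str:
          """Build a prompt from the best-matching notes and query the local model."""
          context = "\n\n".join(f"## {name}\n{text[:2000]}" for name, text in
                                top_matches(question, load_notes(NOTES_DIR)))
          prompt = (f"Answer using only these notes:\n{context}\n\n"
                    f"Question: {question}\nAnswer:")
          resp = requests.post(OLLAMA_URL,
                               json={"model": MODEL, "prompt": prompt, "stream": False},
                               timeout=300)
          resp.raise_for_status()
          return resp.json()["response"]

      if __name__ == "__main__":
          print(ask("What have I written about personal software agents?"))
      ```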

    3. We will have to design this very carefully, or it'll give a whole new meaning to filter bubbles.

      Not just a bubble, it will be the FB timeline. Key here is agency, and designing for human biases. A model is likely much better than I am at managing the diversity of sources for me, if I give it a starting point myself, or at seeing which outliers to include, etc. Again, I think it also means moving away from single artefacts. Often I'm not interested in what everyone is saying about X, but in who is talking about X. Patterns, not singular artefacts. See [[Mijn ideale feedreader 20180703063626]]

    4. I expect these to be baked into browsers or at the OS level. These specialised models will help us identify generated content (if possible), debunk claims, flag misinformation, hunt down sources for us, curate and suggest content, and ideally solve our discovery and search problems.

      Appleton suggests that agents to fact-check / filter / summarise / curate and suggest (those last two are more personal than the others, which are the grunt work of infostrats) would become part of your browser. Only if I can strongly influence what they do myself (otherwise it is the FB timeline all over again!)

      If these models become part of the browser, do we still need the browser as a metaphor for a window on the web, or for surfing the net? Why wouldn't those models deliver whatever they grabbed from the web/net/darkweb in the right spot in my own infostrats? The browser itself is not a part of my infostrats, it's the starting point of them, the viewer on the raw material. Whatever I keep from browsing is where PKM starts. When the model filters / curates, why not put the results in the right spots for me to start working with them / on them / processing them? The model not as part of the browser, but doing the actual browsing: an active agent going out there to flag patterns of interest (based on my preferences, current issues, etc.) and organising them for me for my next steps? [[Individuele software agents 20200402151419]]

    5. Those were all a bit negative but there is some hope in this future. We can certainly fight fire with fire. I think it’s reasonable to assume we’ll each have a set of personal language models helping us filter and manage information on the web

      Yes, agency at the edges. People running their own agents. Have your agents talk to my agents to arrange a meeting, etc. That actually frees up time. Have my agent check out the context and background of a text to judge whether it has a human author or not, etc. [[Persoonlijke algoritmes als agents 20180417200200]] [[Individuele software agents 20200402151419]]

    6. But some people will realise they shouldn’t be letting language models literally write words for them. Instead, they'll strategically use them as part of their process to become even better writers. They'll integrate them by using them as sounding boards while developing ideas, research helpers, organisers, debate partners, and Socratic questioners.

      This hints at prompt engineering, and at the role of prompts in human interaction itself. [[Prompting skill in conversation and AI chat 20230301120740]]

      High-quality use of generative AI will be about where in a creative / work process you employ it, and to what purpose; not about accepting the current interface presented to us in e.g. ChatGPT: give me an input and I'll give you an output. This in turn requires an understanding of one's own creative work processes, and of where tools can help reduce friction (and where the friction is the actual cognitive work and must not be taken out).

  3. Mar 2023
    1. https://web.archive.org/web/20230316103739/https://subconscious.substack.com/p/everyone-will-have-their-own-ai

      Compare [[Onderzoek selfhosting AI tools 20230128101556]] and [[Persoonlijke algoritmes als agents 20180417200200]] and [[Everymans Allemans AI 20190807141523]] and [[AI personal assistants 20201011124147]]

  4. Jan 2023
    1. I don't presently have plans to expand this into an annotation extension, as I believe that purpose is served by Hypothesis. For now, I see this extension as a useful way for me to save highlights, share specific pieces of information on my website, and enable other people to do the same.

      I wonder whether it uses the W3C recommendation for highlighting and annotation, though? That would allow it to interoperate with other highlighting/annotation tools; a sketch of what such an annotation could look like is below.
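      A minimal sketch (not the extension's actual output) of what a single highlight could look like in the W3C Web Annotation Data Model; the URL and the quoted text are placeholders:

      ```python
      # Build and print one highlight as a W3C Web Annotation (JSON-LD).
      import json

      highlight = {
          "@context": "http://www.w3.org/ns/anno.jsonld",
          "type": "Annotation",
          "motivation": "highlighting",
          "target": {
              # placeholder: the page the highlight was made on
              "source": "https://example.org/some-article",
              "selector": {
                  # anchors the highlight by quoting the exact text
                  "type": "TextQuoteSelector",
                  "exact": "the passage that was highlighted",
              },
          },
      }

      print(json.dumps(highlight, indent=2))
      ```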

      To me highlighting is annotation, though a lightweight form, as the decision to highlight is interacting with the text in a meaningful way. And the pop-up box actually says Annotation right there in the screenshot, so I don't fully grasp what distinction James is making here.