1,024 Matching Annotations
  1. Aug 2022
    1. Howard Rheingold on Tools for Thought

      Not just a category these days, but also the title of a 1985 book by Howard. (HTML version of that book: http://www.rheingold.com/texts/tft/ )

      Web Archive URL: https://web.archive.org/web/20220815051435/https://theinformed.life/2022/08/14/episode-94-howard-rheingold/

    2. First, it taught me that there was a history to this stuff, and it also expanded the frontiers of what I understood I was doing

      The 'history of this stuff' not being seen is a recurring pattern, e.g. Luhmann vs commonplacing in the Roam/Obsidian wave, or open data around 2010, when there was little awareness of the earlier efforts by re-users that led to the PSI Directive, only a new wave of coders using the fact that it existed.

      It's also a repeating pattern across generations. Open Space and unconferencing, for example, need to be retaught with every new wave of people. The open web of two decades ago needs to be explained to those now starting their professional work using online tools.

      Spaced repetition for groups/society?

      In order to expand the understanding of what one is actually doing / building on.

      Reminds me of that '90s exchange student who once asked me whether I was studying history instead of electrical engineering: with everything I also explained its development path.

    1. a large neural network that has been trained by reading the internet, trying to predict what the next word will be. This might not sound particularly useful. But it turns out the class of problems that can be reformulated as text predictions is vast.

      not the entire internet, I think? The Playground provides only English answers, and when I asked the script about certain things, the examples had a singular focus on the US. And when discussing a popular book that is only available in Dutch, it clearly had no actual information to work with, despite the script first boasting that it studied Dutch literature in Amsterdam in the 2000s :D GPT-3's Playground is anglo-centric at least.
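
      To make that reformulation concrete: a few illustrative templates (my own examples, not the essay's) that recast everyday tasks as 'predict the next words' problems, in Python:

      ```python
      # Hypothetical prompt templates: each task becomes text to be continued.
      templates = {
          "translation":   "Translate English to Dutch:\nEnglish: {text}\nDutch:",
          "q_and_a":       "Q: {text}\nA:",
          "summarisation": "{text}\n\nTl;dr:",
          "sentiment":     "Review: {text}\nSentiment (positive/negative):",
      }

      # Fill in a template and hand the result to a language model,
      # which then 'answers' simply by predicting what comes next.
      prompt = templates["q_and_a"].format(text="Who wrote Tools for Thought?")
      print(prompt)
      ```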

    2. To access GPT-3, you set up an account at OpenAI. Then you click on Playground, which brings you to this workspace:

      did that. Playing with it is highly fascinating. Saving some conversations as examples.
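
      Besides the Playground, the same models are reachable programmatically. A minimal sketch using the 2022-era openai Python package; the model name, prompt and parameters here are my own assumptions, for illustration:

      ```python
      import openai

      openai.api_key = "sk-..."  # from your OpenAI account settings

      # Ask the GPT-3 model behind the Playground to continue a prompt.
      response = openai.Completion.create(
          model="text-davinci-002",  # assumed 2022-era GPT-3 model name
          prompt="GPT-3 reminds me of blogging because",
          max_tokens=100,
          temperature=0.7,  # higher values give more surprising completions
      )
      print(response["choices"][0]["text"])
      ```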

    3. I think the skill involved will be similar to being a good improv partner, that’s what it reminds me of.

      that sounds like a useful analogy. Prompting like you are the algo's improv partner. The flipside seems to be the impact the author himself is after: being prompted along new lines of inquiry, making the script your improv partner in return.

    4. GPT-3 is by no means a reliable source of knowledge. What it says is nonsense more often than not! Like the demon in The Exorcist, language models only add enough truth to twist our minds and make us do stupid things

      The need to be aware that GPT-3 is a text generation tool, not an accurate search engine. However, being factually correct is not a prerequisite for experiencing surprisal. The author uses the tool to open up new lines of thought, so his prompt engineering in a way is aimed at being prompted himself. This is reminiscent of how Luhmann talks about communicating with his index cards: the need for factuality does not reside with the card; meaning is (re)constructed in the act of communication. The locus of meaning is the conversation, the impact it has on oneself, less so the content, it seems.

    5. I’ve talked to people who prompt GPT-3 to give them legal advice and diagnose their illnesses (for an example of how this looks, see this footnote1). I’ve talked to men who let their five-year-olds hang out with GPT-3, treating it as an eternally patient uncle, answering questions, while dad gets on with work.

      The essay gives various examples of usage:

      - legal advice
      - medical diagnosis
      - a nanny to talk to your kid
      - a research assistant, prompting it for surprisal, basically to come up with lines of inquiry and questions
      - let the algo impersonate someone and run ideas by that impersonation
      - let the algo impersonate opposing debate partners
      - list possible counterarguments
      - draw analogies between knowledge domains

    6. augment human intelligence

      Doug Engelbart overtones

    7. a new interface for the internet.

      GPT-3 is a way to approach the information on the internet, an interface for the internet. This I associate with aspects of distributedness: apps are data viewers (like Obsidian.md is), interfaces are queries on that data.

    8. A blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox.

      This phrasing imo instrumentalises those fascinating people you find. Interesting stuff is a byproduct of interacting with those fascinating people, a result from fascinating conversation, a residue of the construct you've built together in conversation.

    9. Therefore, it is intriguing to realize what I am doing is, in fact, prompt engineering. Prompt engineering is the term AI researchers use for the art of writing prompts that make a large language model output what you want. Instead of directly formulating what you want the program to do, you input a string of words to tickle the program in such a way it outputs what you are looking for. You ask a question, or you start an essay, and then you prompt the program to react, to finish what you started.

      I take to the term prompt engineering. Designing prompts is important in narrative research, just as much as in AI, and in e.g. workshop settings. It's definitely a skill. 'Conversational prompts' describes blog posts too.
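
      A sketch of what such prompt engineering can look like in practice, again against the 2022-era openai package; the interview framing and stop sequence are my own illustrative choices, not the author's:

      ```python
      import openai

      openai.api_key = "sk-..."

      # Frame the task as an interview rather than asking a bare question,
      # and tell the model where to stop so it doesn't invent the next Q.
      prompt = (
          "The following is an interview with a thoughtful researcher.\n"
          "Q: How is prompting a language model like improv?\n"
          "A:"
      )

      response = openai.Completion.create(
          model="text-davinci-002",
          prompt=prompt,
          max_tokens=150,
          temperature=0.9,  # leave room for surprisal
          stop=["Q:"],      # cut off before a fabricated follow-up question
      )
      print(response["choices"][0]["text"].strip())
      ```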

    10. When I’m writing this, from March through August 2022

      The author took time over a period of five months to put this essay together. That's impressive, in terms of effort put in and in terms of tenacity. How much of this time was spent 'holding questions', as Johnnie Moore would say, to develop the thoughts iteratively? Could it have been done in index cards, under the radar, with the essay then a smaller effort, reduced to collating those index cards?

    11. When I’ve been doing this with GPT-3, a 175 billion parameter language model, it has been uncanny how much it reminds me of blogging

      This is intriguing: seeing a similar return on prompting GPT-3 as on blogging. After reading this essay the first time, I played with GPT-3 myself, and even from a first attempt it is clear what he means. It feels like a similar process, prompting GPT-3 and pushing a notion, bookmark or question into my blog's feed. The first reactions of both kinds bring similar levels of surprisal. What is missing from GPT-3 in comparison with my blog, however, is that blog networks are more than a 1-on-1 prompt and response. They form larger feedback loops, which in turn lift signals above the noise.

    12. https://web.archive.org/web/20220810205211/https://escapingflatland.substack.com/p/gpt-3

      Blogged a few first associations at https://www.zylstra.org/blog/2022/08/communicating-with-gpt-3/ . Prompt design for narrative research may be a useful experience here. Is 'interviewing' GPT-3 a Luhmann-style conversation with a system? Can we ditch our notes for GPT-3? GPT-3 as interface to the internet. Fascinating essay, need to explore.

    1. Like most things in life, the answer is a complicated balance. And you have to find your way and find your balance, which isn’t easy no matter who you are or what you do. After two years of trauma, I’m going to crack on loads more. Make some new memories, new good times, which in the future I’ll be able to look back on as part of my nostalgia. Just have to find that tricky balance.

      Ruben is quoting Geoff Marshall in a video here. I recognise what Ruben says about his mental health, the melancholic funk, both from myself and E. Sometimes the current months are harder than when the pandemic first hit. Things seem normal, except they aren't. Geoff suggests adding new experiences now, so they become part of his future nostalgia, as a counterbalance to the past two years. Not pushing stuff away but balancing it. Reminds me a bit of what I used to say about 'hiding' unwanted Google results: publish more online so that it balances out and the unwanted things aren't the dominant search results.

    1. any animal equivalent is going to have to need oxygen — a lot of it.

      does this still hold up? The Wikipedia article points to several more recent sources that seem to offer counter-information: https://en.wikipedia.org/wiki/Rare_Earth_hypothesis#Free_oxygen_may_be_neither_rare_nor_a_prerequisite_for_multicellular_life The source of the claim seems to be the book itself, so I would need to look it up in Ward & Brownlee 2000, page 217.

    2. have really rapidly moving creatures and rapidly thinking creatures, which is a form of movement

      Ward says this in the context of the Rare Earth hypothesis. An intriguing notion, seeing thinking as a form of (deliberate) movement. How is this meant to be understood? Chemically, in terms of what it requires in an animal (i.e. us)? The remark is made in the context of the need for oxygen for complex life to be possible (based on David Catling, Univ. of Washington), after all. Or environmentally/contextually, as both deliberate physical movement and brain activity are responses to outside impulses?

    1. This article is the first in a four-part series, where we will look deeper into the relationship between data mesh and privacy. The series will cover:

       - How a data mesh architecture can support better data privacy controls.
       - How to shift from a centralized governance model to a federated approach.
       - How to focus on automation as a cornerstone of your governance strategy.
       - How to bake privacy tech into your self-service platform approach.

      when will the other parts be published?

    2. The data mesh paradigm allows us to see data in a new way

      still need to read the Data Mesh book (https://www.oreilly.com/library/view/data-mesh/9781492092384/), but it seems to align well with the thinking behind the EU Data Spaces.

    3. Privacy-first data via data mesh

    1. I consider this a Public Domain image as the image does not pass the ‘creativity involved’ threshold which generally presupposes a human creator, for copyright to apply (meaning neither AI nor macaques).

      I say this, but there's a nuance to consider. I read a post by someone creating their company logo with Dall-E by repeatedly changing and tweaking their prompt to get to a usable output. That is definitely above the creativity threshold, with the AI as a tool, not as the creator. Similarly, NLP AI tools can help authors get to e.g. a first draft, which is then shaped, rewritten, changed, edited etc., crossing the human creativity threshold for copyright to kick in. Compare with how I sometimes use machine translation of my own text and then clean it up, to be able to write faster in e.g. German or French, where the algo is a lever to turn my higher passive language skills into active language use. (Btw, comment added to see if that updates my original Hypothes.is annotations of this article in my Obsidian notes, or if it happens only once when first annotated. The latter would mean forcing annotation and would thus break my workflow.)

  2. Jul 2022
  3. Jun 2022
    1. the classic idea of blogging as thinking out loud, but here with others.

      Alan pointed to the same notion elsewhere. Blogging should be more about open-ended curiosity and holding questions than about explaining or sharing one's coherent worldview or current truth about something. This with an eye to the former being a better prompt for conversations. I agree that conversations (distributed ones, taking place over multiple blogs) are a key thing in blogging. I also believe in the 'obligation to explain', as ruk.ca says: if you have figured something out, or created something, you have a civic duty to explain it so others may find their way to their solution faster. (This annotation is also meant as a test to see how it ends up in Hypothes.is and gets sync'd or not to my notes locally.)

  4. Apr 2022
    1. Something else that came out of this research is the fact that the length of company’s lives is shrinking at almost one year per year. In 01950, the average company on the Fortune 500 had been around for 61 years. Now it’s 18 years. Companies’ lives are getting shorter.

      I recognise the statistic, but the conclusion that companies' lives are getting shorter doesn't follow without further evidence. There are definitely many more companies than before (more population plus increased digitisation and mobility = more companies), so the bulk of existing companies is younger than before. Some will be successful enough to reach the Fortune 500 faster than before, driving down the average age of companies on that list. It doesn't mean that every company dropping out of the Fortune 500 ceases to exist. It may continue to exist in the exact same way as the longest-living companies mentioned elsewhere in the article. In other words, they may be doing exactly what the article counsels, turning this factoid into the opposite of the argument it is now used for. In short: you can't say this unless you have data about the discontinued companies, both nowadays and in the previous 2-3 centuries.
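
      A toy calculation (my own, not from the article) illustrates the point: hold company lifespan fixed and only grow the rate at which new companies are founded, and the average age of the population any 'top 500' list samples from drops anyway (assuming, for simplicity, that the list's age mix mirrors the population's):

      ```python
      # Fixed 60-year lifespan for every company; only the founding rate varies.
      LIFESPAN = 60

      def avg_age(growth_per_year: float) -> float:
          # A cohort founded `a` years ago is smaller by (1 + growth)^-a,
          # because fewer companies were being started back then.
          ages = range(LIFESPAN)
          weights = [(1 + growth_per_year) ** -a for a in ages]
          return sum(a * w for a, w in zip(ages, weights)) / sum(weights)

      print(f"flat economy:          avg company age = {avg_age(0.00):.1f} years")
      print(f"5%/yr more foundings:  avg company age = {avg_age(0.05):.1f} years")
      # The second number is markedly lower, yet no company lived a day shorter.
      ```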