770 Matching Annotations
  1. Last 7 days
    1. Broderick makes a more important point: AI search is about summarizing web results so you don't have to click links and read the pages yourself. If that's the future of the web, who the fuck is going to write those pages that the summarizer summarizes? What is the incentive, the business-model, the rational explanation for predicting a world in which millions of us go on writing web-pages, when the gatekeepers to the web have promised to rig the game so that no one will ever visit those pages, or read what we've written there, or even know it was us who wrote the underlying material the summarizer just summarized? If we stop writing the web, AIs will have to summarize each other, forming an inhuman centipede of botshit-ingestion. This is bad news, because there's pretty solid mathematical evidence that training a bot on botshit makes it absolutely useless. Or, as the authors of the paper – including the eminent cryptographer Ross Anderson – put it, "using model-generated content in training causes irreversible defects"

      Broderick: https://www.garbageday.email/p/ai-search-doomsday-cult, Anderson: https://arxiv.org/abs/2305.17493

      AI search hides the authors of the material it presents; summarising is abstracting away the authors. It doesn't bring readers to those authors, it just presents a summary to the searcher as end result. Take it or leave it. At the same time, if one searches for something you know about, you see those summaries are always off. That leaves you guessing how off they are when searching for something you don't know about. Search should never be the endpoint, always a starting point. I think that is my main aversion to AI search tools. Despite those clamoring 'it will get better over time', I don't think it easily will, because neither the tool nor its makers necessarily have any interest in the quality of the output, and they definitely can't assess it. So what's next, humans fact-checking AI output? Why not prevent bs at its source? Nice ref to Maggie Appleton's centipede metaphor in [[The Expanding Dark Forest and Generative AI]]

  2. Feb 2024
    1. I major in English literature in university. I major in interpreting. You went to gr for my master degree? I think more than 100 people applied and eight students were admitted. How many people finally passed the final exam? Two. Two. Two. Wow.




    1. Constructing Prompts for the Command Model: techniques for constructing prompts for the Command model.
    1. Now, let’s modify the prompt by adding a few examples of how we expect the output to be.

      ```python
      user_input = "Send a message to Alison to ask if she can pick me up tonight to go to the concert together"

      prompt = f"""Turn the following message to a virtual assistant into the correct action:

      Message: Ask my aunt if she can go to the JDRF Walk with me October 6th
      Action: can you go to the jdrf walk with me october 6th

      Message: Ask Eliza what should I bring to the wedding tomorrow
      Action: what should I bring to the wedding tomorrow

      Message: Send message to supervisor that I am sick and will not be in today
      Action: I am sick and will not be in today

      Message: {user_input}"""

      response = generate_text(prompt, temp=0)
      print(response)
      ```

      This time, the style of the response is exactly how we want it: Can you pick me up tonight to go to the concert together?
    2. But we can also get the model to generate responses in a certain format. Let’s look at a couple of them: markdown tables
    3. And here’s the same request to the model, this time with the product description added as context.

      ```python
      context = """Think back to the last time you were working without any distractions in the office. That's right...I bet it's been a while. \
      With the newly improved CO-1T noise-cancelling Bluetooth headphones, you can work in peace all day. Designed in partnership with \
      software developers who work around the mayhem of tech startups, these headphones are finally the break you've been waiting for. With \
      fast charging capacity and wireless Bluetooth connectivity, the CO-1T is the easy breezy way to get through your day without being \
      overwhelmed by the chaos of the world."""

      user_input = "What are the key features of the CO-1T wireless headphone"

      prompt = f"""{context}
      Given the information above, answer this question: {user_input}"""

      response = generate_text(prompt, temp=0)
      print(response)
      ```

      Now, the model accurately lists the features of the product. The answer is: The CO-1T wireless headphones are designed to be noise-canceling and Bluetooth-enabled. They are also designed to be fast charging and have wireless Bluetooth connectivity.
    4. While LLMs excel in text generation tasks, they struggle in context-aware scenarios. Here’s an example. If you were to ask the model for the top qualities to look for in wireless headphones, it will duly generate a solid list of points. But if you were to ask it for the top qualities of the CO-1T headphone, it will not be able to provide an accurate response because it doesn’t know about it (CO-1T is a hypothetical product we just made up for illustration purposes). In real applications, being able to add context to a prompt is key because this is what enables personalized generative AI for a team or company. It makes many use cases possible, such as intelligent assistants, customer support, and productivity tools, that retrieve the right information from a wide range of sources and add it to the prompt.
    5. We set a default temperature value of 0, which nudges the response to be more predictable and less random. Throughout this chapter, you’ll see different temperature values being used in different situations. Increasing the temperature value tells the model to generate less predictable responses and instead be more “creative.”
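      As a rough sketch of what the temperature setting does (an illustration of the general sampling mechanism only, not any provider's actual implementation; `temperature_softmax` and the example logits are invented here):

```python
import math

def temperature_softmax(logits, temp):
    """Turn raw logits into sampling probabilities, scaled by temperature.

    Lower temperatures sharpen the distribution (more predictable output);
    higher temperatures flatten it (more "creative" output).
    temp=0 is treated as greedy argmax, matching the deterministic
    behaviour described above.
    """
    if temp == 0:
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    exps = [math.exp(x / temp) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
print(temperature_softmax(logits, 0))    # greedy: [1.0, 0.0, 0.0]
print(temperature_softmax(logits, 0.5))  # sharply peaked on the first token
print(temperature_softmax(logits, 5.0))  # close to uniform
```

      At temp=0 the highest-scoring token always wins; raising the temperature spreads probability mass over the alternatives.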
    1. T. Herlau, "Moral Reinforcement Learning Using Actual Causation," 2022 2nd International Conference on Computer, Control and Robotics (ICCCR), Shanghai, China, 2022, pp. 179-185, doi: 10.1109/ICCCR54399.2022.9790262. keywords: {Digital control;Ethics;Costs;Philosophical considerations;Toy manufacturing industry;Reinforcement learning;Forestry;Causality;Reinforcement learning;Actual Causation;Ethical reinforcement learning}

    1. This technical report focuses on (1) our method for turning visual data of all types into a unified representation that enables large-scale training of generative models, and (2) qualitative evaluation of Sora’s capabilities and limitations. Model and implementation details are not included in this report.

      AI to generate video images.

    1. [[Lee Bryant]] links to this overview by Simon Willison of what happened in #2023/ in #AI . Some good pointers wrt [[ChatPKM myself]] dig those out.

    1. Oh, compliance moats are definitely real – think of the calls for AI companies to license their training data. AI companies can easily do this – they'll just buy training data from giant media companies – the very same companies that hope to use models to replace creative workers with algorithms. Creating a new copyright over training data won't eliminate AI – it'll just confine AI to the largest, best capitalized companies, who will gladly provide tools to corporations hoping to fire their workforces: https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids

      Concentration of power.

  3. Jan 2024
    1. But I maintain that all of this is a monumental and dangerous waste of human talent and energy. Imagine what might be accomplished if this talent and energy were turned to philosophy, to theology, to the arts, to imaginative literature or to education? Who knows what we could learn from such people - perhaps why there are wars, and hunger, and homelessness and mental illness and anger

      nice case for liberal education

    1. External Resources

      Resource Collections

      AI in Education Resource Directory

      This document contains AI resources of interest to instructors in higher education including tools, readings and videos, presentations, links to AI policies and a resource spreadsheet. The document is managed by Daniel Stanford (SCAD) and contributed to by the AI in Education Google Group.

      Courses and Tutorials

      Prompt Engineering for ChatGPT

      This popular six-module course provides basic instruction in how to work with large language models and how to create complex prompt-based applications for use in education scenarios. Dr. Jules White (Vanderbilt) is the instructor for the course. Absolute beginners to experienced users of large language models will find helpful guidance on designing prompts and using patterns.

      AI Checker Resources

      Michael Coley, Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector, Vanderbilt University, https://www.vanderbilt.edu/brightspace/2023/08/16/guidance-on-ai-detection-and-why-were-disabling-turnitins-ai-detector/ (last visited Sep 25, 2023).

      In August, 2023, Vanderbilt's Center for Teaching and Learning provided an explanation of the university's decision to disable Turnitin's AI detection tool. Other universities, such as the University of Pittsburgh, have provided comparable statements about AI writing detection. Vanderbilt noted that AI detection was a difficult or impossible task for technology to solve and will become more difficult as AI tools become more common and advanced. The articles below describe some of the technical challenges with AI detection and unintended effects (e.g., bias against non-native English writers).

      1. Vinu Sankar Sadasivan et al., Can AI-Generated Text Be Reliably Detected?, (2023), http://arxiv.org/abs/2303.11156 (last visited Oct 26, 2023).
      2. Andrew Myers, AI-Detectors Biased Against Non-Native English Writers, Stanford HAI (2023), https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers (last visited Sep 25, 2023).
      3. Susan D’Agostino, Turnitin’s AI Detector: Higher-Than-Expected False Positives, Inside Higher Ed (2023), https://www.insidehighered.com/news/quick-takes/2023/06/01/turnitins-ai-detector-higher-expected-false-positives (last visited Sep 25, 2023).
      4. Geoffrey A. Fowler, Analysis | We Tested a New ChatGPT-Detector for Teachers. It Flagged an Innocent Student., Washington Post, Apr. 14, 2023, https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/ (last visited Sep 25, 2023).
      5. Michael Webb, AI Detection - Latest Recommendations, National centre for AI (Sep. 18, 2023), https://nationalcentreforai.jiscinvolve.org/wp/2023/09/18/ai-detection-latest-recommendations/ (last visited Jan 25, 2024).
    1. After a bit of experimentation (and in a discovery that led us to collaborate), Southen found that it was in fact easy to generate many plagiaristic outputs, with brief prompts related to commercial films (prompts are shown).

      Plagiaristic outputs from blockbuster films in Midjourney v6

      Was the LLM trained on copyrighted material?

    1. More, essentially all research in self-reference for decades has been in artificial intelligence, which is the device around which this plot turns. The language of AI is LISP, the name of the archvillain. In the heyday of LISP machines, the leading system was Flavors LISP Object Oriented Programming or: you guessed it -- Floop. I myself worked on a defense AI program that included the notion of a `third brain,' that is an observer living in a world different than (1) that of the world's creator, and (2) of the characters.
    1. Searching as exploration. White and Roth [71, p.38] define exploratory search as a “sense making activity focused on the gathering and use of information to foster intellectual development.” Users who conduct exploratory searches are generally unfamiliar with the domain of their goals, and unsure about how to achieve them [71]. Many scholars have investigated the main factors relating to this type of dynamic task, such as uncertainty, creativity, innovation, knowledge discovery, serendipity, convergence of ideas, learning, and investigation [2, 46, 71]. These factors are not always expressed or evident in queries or questions posed by a searcher to a search system.

      Sometimes, search is not rooted in discovery of a correct answer to a question. It's about exploration. Serendipity through search. Think Michael Lewis, Malcolm Gladwell, and Latif Nasser from Radiolab. The randomizer on wikipedia. A risk factor of where things trend with advanced AI in search is an abandonment of meaning making through exploration in favor of a knowledge-level pursuit that lacks comparable depth to more exploratory experiences.

    1. the canonical unit, the NCU supports natural capital accounting, currency source, calculating and accounting for ecosystem services, and influences how a variety of governance issues are resolved
      • for: canonical unit, collaborative commons - missing part - open learning commons, question - process trap - natural capital

      • comment

        • in this context, indyweb and Indranet are not the canonical unit, but then, it seems the model is fundamentally missing the functionality provided by the Indyweb and Indranet, which is an open learning system.
        • without such an open learning system that captures the essence of how humans learn, the activity of problem-solving cannot be properly contextualised, along with all of its limitations leading to progress traps.
        • The entire approach of posing a problem, then solving it is inherently limited due to the fractal intertwingularity of reality.
      • question: progress trap - natural capital

        • It is important to be aware that there is a real potential for a progress trap to emerge here, as any metric is liable to be abused
    1. it didn’t mention more recent work on how to make large language models more energy efficient and mitigate problems of bias.
      • for: AI ethics controversy - citations from Dean please!

      • comment

        • Can Dean please provide the missing citations he is referring to?
    2. In 2017, Facebook mistranslated a Palestinian man’s post, which said “good morning” in Arabic, as “attack them” in Hebrew, leading to his arrest.
      • for: example - progress trap - AI - mistranslation
    3. because the training data sets are so large, it’s hard to audit them to check for these embedded biases. “A methodology that relies on datasets too large to document is therefore inherently risky,
      • for: AI - untraceability - metaphor

      metaphor - untraceability - AI: like a self-configuring engine

        • Imagine a metaphor in the automobile industry: a car that could design itself.
        • Now imagine the car breaking down, and the owner has to bring it into a repair shop to get it fixed.
        • The problem is that because the AI car designed its own engine and did not make a record of how that was done, no mechanic can fix it.

      • for: progress trap - AI, carbon footprint - AI, progress trap - AI - bias, progress trap - AI - situatedness
    1. 蔡叡浩: Lately the ads for something called the Plaud Note have been relentless, and I was in the very first early-bird batch of its zeczec crowdfunding campaign. After using it these past few days I honestly think it is terrible: call-recording quality is poor, you must use your phone without a case, files occasionally go missing, and the recording-to-text feature is barely passable. // But here is the strangest part: under every sponsored post on Facebook, a crowd of netizens post photos of the product they received and rave about it. Go to the comment section of their zeczec crowdfunding page instead, and it is a disaster zone, full of people saying they want refunds. How much money went into this word-of-mouth manipulation?

      The difference between truth and marketing: the former takes effort to unearth; for the latter, money gets it done.

  4. Dec 2023
    1. the celebrated figures Henry Kissinger

      I think Kissinger's figure is too controversial to leave it at "celebrated".

    2. David Hume’s (2011) formulation of the is–ought problem.
    3. Beyond simple associations it acquires high-level abstractions like expressive structure, ideology or belief systems, since these are all embodied in the corpora that make up its training sets.

      hm, I'm not sure how LLMs acquire these higher-level concepts out of the probabilistic relations just described.

    1. Universal Summarizer

      (Summary generated with Kagi's Universal Summarizer.)

      Bandcamp has operated as an online music store for over a decade, providing artists and labels with an easy-to-use platform to sell music directly to fans. While receiving little mainstream attention, Bandcamp has paid out $270 million to artists and maintained a simple, artist-focused design. The platform allows free streaming but encourages direct purchases from artists. Chance the Rapper has been a notable champion of Bandcamp, using it for early mixtapes and helping to bring attention to its role in supporting independent musicians. While other services focus on algorithms and playlists, Bandcamp prioritizes direct artist support through low fees and transparent sales data. It has changed little over the years but provides a niche alternative for direct fan-artist connections without the culture-diluting aspects of other streaming services. Bandcamp's low-key approach has helped it avoid issues faced by competitors while continuing to innovate for artists.

      • for: AI, Anirban Bandyopadhyay, brain gel, AI - gel computer

      • title: A general-purpose organic gel computer that learns by itself

      • author
        • Anirban Bandyopadhyay
        • Pathik Sahoo
        • et al.
      • date: Dec. 6, 2023
      • publication: IOPScience
      • DOI: 10.1088/2634-4386/ad0fec

      • ABSTRACT

        • To build energy-minimized superstructures, self-assembling molecules explore astronomical options, colliding ∼10⁹ molecules s⁻¹. Thus far, no computer has fully used this to optimize choices and execute advanced computational theories only by synthesizing supramolecules.
        • To realize it,
          • first, we remotely re-wrote the problem in a language that supramolecular synthesis comprehends.
          • Then, all-chemical neural network synthesizes one helical nanowire for one periodic event. These nanowires self-assemble into gel fibers mapping intricate relations between periodic events in any-data-type,
          • the output is read instantly from optical hologram.
          • Problem-wise, self-assembling layers or neural network depth is optimized to chemically simulate theories discovering invariants for learning.
          • Subsequently, synthesis alone solves classification, feature learning problems instantly with single shot training.
          • Reusable gel begins general-purpose computing that would chemically invent suitable models for problem-specific unsupervised learning. Irrespective of complexity,
            • keeping fixed computing time and power, gel promises a toxic-hardware-free world.
    1. it's extremely dangerous to create such an autonomous agent when we do not know how to control it, when we can't ensure that it will not escape our control and start making decisions and creating new things which will harm us instead of benefit us. Now, this is not a doomsday prophecy; this is not inevitable. We can find ways to regulate and control the development and deployment of AI.
      • for: quote - Yuval Noah Harari - AI progress trap, progress trap - AI, quote - progress trap

      • quote: it is extremely dangerous to create such an autonomous agent when we do not know how to control it, when we can't ensure that it will not escape our control and start making decisions and creating new things which will harm us instead of benefit us

      • author: Yuval Noah Harari
      • date 2023
    2. AIs at present have intelligence, but they don't have any consciousness. There is a huge confusion in many places between intelligence and consciousness. Intelligence is the ability to solve problems, to create new things, whatever. Consciousness is the ability to have feelings: okay, you can beat me in chess, but are you joyful when you win, are you sad when you lose? AIs and computers, as far as we know, have a lot of intelligence and zero consciousness.
      • for: AI - lack feelings
    1. Please show me some examples of evergreen content

      In answer to "Please show me some examples of evergreen content", HiveDesk suggested:

      Evergreen content refers to content that remains relevant and valuable to readers over a long period of time. When it comes to the Hive blockchain, evergreen content might include beginner's guides to using the platform, explanations of key concepts like decentralized finance, or tutorials on how to develop applications on the Hive network.

      Some examples of evergreen content for the Hive blockchain could be: - "A Beginner's Guide to Hive: Getting Started with Decentralized Social Media and Cryptocurrency" - "Understanding Decentralized Finance (DeFi) on the Hive Blockchain" - "How to Build and Deploy DApps on the Hive Network: A Step-by-Step Tutorial"

      These types of content would provide value to anyone interested in learning about or getting involved with the Hive blockchain, regardless of when they come across the information.

      References: - "A Beginner's Guide to Hive: Getting Started with Decentralized Social Media and Cryptocurrency" (hive.io/eco) - "Understanding Decentralized Finance (DeFi) on the Hive Blockchain" (hive.blog) - "How to Build and Deploy DApps on the Hive Network: A Step-by-Step Tutorial" (developers.hive.io)


    1. I think it's more likely that we will think that this particular set of AI procedures that we linked into our strategic nuclear weapons system will keep us safer, but we haven't recognized that there are unintended consequences, glitches in it, that make it actually stupid, and it mistakes a flock of geese for an incoming barrage of Russian missiles and unleashes everything in response before we can intervene.
      • for: example - stupid AI - nuclear launch, AI - progress trap - example - nuclear launch
    2. I think the most dangerous thing about AI is not super smart AI, it's stupid AI: artificial intelligence that is good enough to be put in charge of certain processes in our societies but not good enough to not make really bad mistakes.
      • for: quote - Thomas Homer-Dixon, quote - danger of AI, AI progress trap

      • quote: danger of AI

        • I think the most dangerous thing about AI is not super smart AI, it's stupid AI that is good enough to be put in charge of certain processes but not good enough to not make really bad mistakes
      • author: Thomas Homer-Dixon
      • date: 2021
    3. there's this broader issue of being able to get inside other people's heads: as we're driving down the road, all the time we're looking at other people, and because we have very advanced theories of mind
      • for: comparison - AI - HI - example - driving, comparison - artificial i human intelligence - example - driving
    1. LLM based tool to synthesise scientific K

      #2023/12/12 mentioned by [[Howard Rheingold]] on M.

    1. This team, which carries Facebook's future, is small: roughly 30 research scientists and 15 engineers. It has three branches: the main office of Facebook's AI research group is at Astor Place in New York City, where LeCun manages a team of 20 engineers and researchers. Menlo Park hosts a branch of equal size. In June, FAIR set up an even smaller five-person group in Paris, in partnership with INRIA (the French research institute for computer science and automation). Many other teams elsewhere in Facebook also work on AI, such as the language technology team; FAIR is just the main research division. These researchers and engineers come from every corner of the tech field, and many of them have worked with LeCun before. High-level AI research is not a huge field, and many of LeCun's students have founded AI startups, which tend to get acquired by larger companies such as Twitter. LeCun once told Wired: "Deep learning is really a conspiracy between Geoff Hinton, me, and Yoshua Bengio of the University of Montreal." Hinton does AI research at Google, Bengio shuttles between the University of Montreal and the data-mining company Apstat, and LeCun likewise has countless ties to other well-known companies in the industry.


    1. https://web.archive.org/web/20231206090650/https://www.theguardian.com/artanddesign/2023/dec/05/wizard-of-ai-artificial-intelligence-alan-warburton-dangers-film

      20 min 'documentary' about what AI does to artists, made with AI by an artist. ODI commissioned it. Does this type of thing actually help any debate? Does it raise questions more forcefully? I doubt it, more likely reinforcing anyone's pre-existing notions. More a curiosum, then.

    1. https://web.archive.org/web/20231205084502/https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

      Description of AI use by the Israeli military in Gaza. Cf. [[AI begincondities en evolutie 20190715140742]] wrt the difference between AGI evolution beginning in a military or a civic setting: AI restraints are applied on the civil side, not in military applications, meaning the likelihood is there, not in civil society. This is true of the EU AI Act too, which excludes the military from its scope.

    1. Given the current state of AI development, fluent impromptu interpreting and translation is simply not difficult anymore.

      Accurate transcription of the source language (SL), i.e. source-language speech recognition, is still the bottleneck.

  5. Nov 2023
    1. This illustration shows four alternative ways to nudge an LLM to produce relevant responses:

      * Generic LLM - Use an off-the-shelf model with a basic prompt. The results can be highly variable, as you can experience when e.g. asking ChatGPT about niche topics. This is not surprising, because the model hasn’t been exposed to relevant data besides the small prompt.
      * Prompt engineering - Spend time structuring the prompt so that it packs more information about the desired topic, tone, and structure of the response. If you do this carefully, you can nudge the responses to be more relevant, but this can be quite tedious, and the amount of relevant data input to the model is limited.
      * Instruction-tuned LLM - Continue training the model with your own data, as described in our previous article. You can expose the model to arbitrary amounts of query-response pairs that help steer the model to more relevant responses. A downside is that training requires a few hours of GPU computation, as well as a custom dataset.
      * Fully custom LLM - Train an LLM from scratch. In this case, the LLM can be exposed to only relevant data, so the responses can be arbitrarily relevant. However, training an LLM from scratch takes an enormous amount of compute power and a huge dataset, making this approach practically infeasible for most use cases today.

      * RAG with a generic LLM - Insert your dataset in a (vector) database, possibly updating it in real time. At query time, augment the prompt with additional relevant context from the database, which exposes the model to a much larger amount of relevant data, hopefully nudging the model to give a much more relevant response.
      * RAG with an instruction-tuned LLM - Instead of using a generic LLM as in the previous case, you can combine RAG with your custom fine-tuned model for improved relevancy.
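      The "RAG with a generic LLM" option can be sketched in a few lines of Python. This is a toy under loud assumptions: a keyword-overlap score stands in for a real vector database's similarity search, and `documents`, `retrieve`, `build_prompt`, and the `generate_text` call are invented names for illustration.

```python
import re

# Toy sketch of "RAG with a generic LLM": retrieve relevant context,
# prepend it to the prompt, then call the model. A keyword-overlap
# score stands in for real embedding-based similarity search.

documents = [
    "The CO-1T headphones offer noise cancelling and fast charging.",
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping takes three to five business days.",
]

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs, k=1):
    # Rank documents by word overlap with the query (toy similarity search).
    return sorted(docs, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"{context}\n\nGiven the information above, answer this question: {query}"

prompt = build_prompt("What is your refund policy?", documents)
print(prompt)
# response = generate_text(prompt, temp=0)  # hypothetical LLM call
```

      A production system would swap `retrieve` for an embedding-based vector search and keep `documents` in a real database, but the prompt-augmentation step keeps exactly this shape.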

    2. OUTBNDSweb: Retrieval-Augmented Generation: How to Use Your Data to Guide LLMs, https://outerbounds.com/blog/retrieval-augmented-generation/ (accessed 13 Nov 2023)

    1. I could understand why people poured their lives into craft: there is nothing quite like watching someone enjoy a thing you’ve made.

      key point - the connection through creativity. Relate to arts & storytelling

    1. Algorithmocene noun /ˈalɡərɪð mə si:n/ — presumably the next geological epoch following our short-lived Anthropocene

      I'm beginning to prefer the term Algorithmocene to Robotocene.

    1. Fine-tuning takes a pre-trained LLM and further trains the model on a smaller dataset, often with data not previously used to train the LLM, to improve the LLM’s performance for a particular task.

      LLMs can be extended with both RAG and Fine-Tuning Fine-tuning is appropriate when you want to customize a LLM to perform well in a particular domain using private data. For example, you can fine-tune a LLM to become better at producing Python programs by further training the LLM on high-quality Python source code.

      In contrast, you should use RAG when you are able to augment your LLM prompt with data that was not known to your LLM at the time of training, such as real-time data, personal (user) data, or context information useful for the prompt.

    2. Vector databases are used to retrieve relevant documents using similarity search. Vector databases can be standalone or embedded with the LLM application (e.g., Chroma embedded vector database). When structured (tabular) data is needed, an operational data store, such as a feature store, is typically used. Popular vector databases and feature stores are Weaviate and Hopsworks that both provide time-unlimited free tiers.
    3. RAG LLMs can outperform LLMs without retrieval by a large margin with much fewer parameters, and they can update their knowledge by replacing their retrieval corpora, and provide citations for users to easily verify and evaluate the predictions.
    4. HopWORKSweb: Retrieval Augmented Generation (RAG) for LLMs, https://www.hopsworks.ai/dictionary/retrieval-augmented-generation-llm (accessed 09 Nov 2023)

    1. The key enablers of this solution are * The embeddings generated with Vertex AI Embeddings for Text * Fast and scalable vector search by Vertex AI Vector Search

      Embedding space is a map of meaning in context. Basically, values are assigned in n-dimensional space such that semantically similar inputs land close together, tying meanings between concepts.

      Example of vectorized n-dimensional embedding

    2. With the embedding API, you can apply the innovation of embeddings, combined with the LLM capability, to various text processing tasks, such as:

      * LLM-enabled Semantic Search: text embeddings can be used to represent both the meaning and intent of a user’s query and documents in the embedding space. Documents that have similar meaning to the user’s query intent will be found fast with vector search technology. The model is capable of generating text embeddings that capture the subtle nuances of each sentence and paragraph in the document.
      * LLM-enabled Text Classification: LLM text embeddings can be used for text classification with a deep understanding of different contexts without any training or fine-tuning (so-called zero-shot learning). This wasn’t possible with past language models without task-specific training.
      * LLM-enabled Recommendation: text embeddings can be used in recommendation systems as a strong feature for training recommendation models such as the Two-Tower model. The model learns the relationship between the query and candidate embeddings, resulting in a next-gen user experience with semantic product recommendation.
      * LLM-enabled Clustering, Anomaly Detection, Sentiment Analysis, and more can also be handled with LLM-level deep semantic understanding.
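      The zero-shot classification idea can be sketched as: pick the label whose embedding lies closest to the text's embedding, with no task-specific training. A word-count vector is a crude stand-in for a real LLM embedding here, and `embed`, `cosine`, and `classify` are invented names for illustration.

```python
import math
from collections import Counter

# Toy zero-shot classification via embedding similarity: no training,
# just nearest-label lookup in the (stand-in) embedding space.

def embed(text):
    # Stand-in for a real LLM text-embedding call: a word-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text, labels):
    # Pick the label whose embedding is closest to the text's embedding.
    v = embed(text)
    return max(labels, key=lambda label: cosine(v, embed(label)))

labels = ["sports news", "cooking recipe", "stock market report"]
print(classify("add butter and stir the recipe gently", labels))  # -> cooking recipe
```

      With real LLM embeddings, the same nearest-label scheme works even when the text shares no literal words with the label, which is what makes the zero-shot claim above plausible.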
    3. Grounded to business facts: In this demo, we didn't try having the LLM to memorize the 8 million items with complex and lengthy prompt engineering. Instead, we attached the Stack Overflow dataset to the model as an external memory using vector search, and used no prompt engineering. This means, the outputs are all directly "grounded" (connected) to the business facts, not the artificial output from the LLM. So the demo is ready to be served today as a production service with mission critical business responsibility. It does not suffer from the limitation of LLM memory or unexpected behaviors of LLMs such as the hallucinations.
    4. GCloudAIweb: Vertex AI Embeddings for Text: Grounding LLMs made easy, https://cloud.google.com/blog/products/ai-machine-learning/how-to-use-grounding-for-your-llms-with-text-embeddings (accessed 09 Nov 2023)

    1. Preparation Steps

      * Ingest data into a database. The destination may be an array or a JSON data type.
      * Harmonize data. This is a lightweight data transformation step.
      * Encode data. This step converts the ingested data into embeddings. One option is to use an external API. For example, OpenAI’s ADA and sentence_transformer have many pre-trained models to convert unstructured data like images and audio into vectors.
      * Load embedding vectors. Data is moved to a table that mirrors the original table but has an additional column of type ‘vector’, JSON, or a blob that stores the vectors.
      * Performance tuning. SingleStoreDB provides JSON_ARRAY_PACK, and vector indexing using HNSW as mentioned earlier. This allows parallel scans using SIMD.
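      The ingest/encode/load steps can be sketched end to end. This is a minimal sketch under stated assumptions: SQLite stands in for SingleStoreDB, and `encode` is a trivial stand-in for an external embedding API such as OpenAI's ADA; the table and column names are invented.

```python
import json
import sqlite3

# Sketch of the preparation pipeline: ingest rows, encode them into
# (stand-in) embedding vectors, and load the vectors into a mirror
# table with an extra JSON column.

def encode(text):
    # Stand-in "embedding": three crude numeric features of the text.
    return [len(text), text.count(" "), sum(map(ord, text)) % 100]

conn = sqlite3.connect(":memory:")

# 1. Ingest data into a database.
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?)",
                 [(1, "return policy: 30 days"), (2, "CO-1T headphone specs")])

# 3./4. Encode data and load embedding vectors: a mirror table with an
# additional JSON column that stores each row's vector.
conn.execute("CREATE TABLE docs_vec (id INTEGER PRIMARY KEY, body TEXT, vec TEXT)")
for doc_id, body in conn.execute("SELECT id, body FROM docs"):
    conn.execute("INSERT INTO docs_vec VALUES (?, ?, ?)",
                 (doc_id, body, json.dumps(encode(body))))

for row in conn.execute("SELECT id, body, vec FROM docs_vec"):
    print(row)
```

      The harmonize and performance-tuning steps are omitted here; in SingleStoreDB the JSON column would instead be a packed vector column with an HNSW index over it.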

    2. In the new AI model, you ingest the data in real time, apply your models by reaching out to one or multiple GPT services, and act on the data while your users are in the online experience. These GPT models may be used for recommendation, classification, personalization, etc., on real-time data. Recent developments, such as LangChain and AutoGPT, may further disrupt how modern applications are deployed and delivered.
    3. Let’s say, for example, you search for a very specific product on a retailer’s website, and the product is not available. An additional API call to an LLM with your request that returned zero results may result in a list of similar products. This is an example of a vector search, which is also known as a similarity or semantic search.
    4. Modes of private data consumption:
       1. Train a custom LLM - requires massive infrastructure, investment, and deep AI skills.
       2. Tune the LLM - adjusts model weights to fine-tune an existing model; a new category of LLMOps, with issues similar to #1.
       3. Prompt general-purpose LLMs - uses modeled context input with Retrieval Augmented Generation (Facebook).

      For leveraging prompts, there are two options:

      • Short-term memory for LLMs, which uses APIs for model inputs.
      • Long-term memory for LLMs, which persists the model inputs.

      Short-term memory is ephemeral, while long-term memory introduces persistence.

    5. Conventional search works on keys. However, when the query is a natural-language sentence, that sentence needs to be converted into a structure that can be compared with words that have similar representations. This structure is called an embedding. An embedding represents text as a vector: an array of numbers that assigns it coordinates in a numeric space. An embedding is high-dimensional, as it uses many dimensions to capture meaning for semantic search.

      When a search is made on a new text, the model calculates the "distance" between terms. For example, "king" is closer to "man" than to "woman." This distance is calculated over the "nearest neighbors" using functions like cosine similarity, dot product, and Euclidean distance. Exhaustive search is expensive at scale, which is where "approximate nearest neighbor" (ANN) algorithms are used to reduce the vector search space. A very popular way to index the vector space is through a library called Hierarchical Navigable Small World (HNSW). Many vector databases and libraries like FAISS use HNSW to speed up vector search.
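A toy illustration of the distance calculation described above, with made-up 3-d vectors (real embeddings have hundreds of dimensions); the exhaustive ranking at the end is what ANN indexes like HNSW approximate at scale:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means orthogonal.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 3-dimensional "embeddings" chosen so that king ~ man.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.8, 0.7, 0.2]),
    "woman": np.array([0.2, 0.7, 0.9]),
}

query = vecs["king"]
# Exact nearest-neighbour search: score every vector and sort.
neighbors = sorted(vecs, key=lambda w: cosine(query, vecs[w]), reverse=True)
print(neighbors)
```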

    6. The different options for storing and querying vectors for long-term memory in AI search:
       * Native vector databases - many non-relational DBMSs are adding vector support, such as Elastic; others include Pinecone, Qdrant, etc.
       * SingleStoreDB supports vector embeddings and semantic search.
       * Apache Parquet or CSV columnar data - slow indices if used.

    7. AIMONKSweb: How to Use Large Language Models (LLMs) on Private Data: A Data Strategy Guide, https://medium.com/aimonks/how-to-use-large-language-models-llms-on-private-data-a-data-strategy-guide-812cfd7c5c79 (accessed 09 Nov 2023)

    1. Retrieval Augmented Generation (RAG) is a method in natural language processing (NLP) that combines the power of both neural language models and information retrieval methods to generate responses or text that are informed by a large body of knowledge. The concept was introduced by Facebook AI researchers and represents a hybrid approach to incorporating external knowledge into generative models.

      RAG models effectively leverage a large corpus of text data without requiring it to be stored in the parameters of the model. This is achieved by utilizing a retriever-generator framework:

      1. The Retriever component is responsible for finding relevant documents or passages from a large dataset (like Wikipedia or a corpus of scientific articles) that are likely to contain helpful information for generating a response. This retrieval is typically based on vector similarity between the query and the documents in the dataset, often employing techniques like dense passage retrieval (DPR).

      2. The Generator component is a large pre-trained language model (like BART or GPT-2) that generates a response by conditioning on both the input query and the documents retrieved by the retriever. It integrates the information from the external texts to produce more informed, accurate, and contextually relevant text outputs.

      The RAG model performs this process in an end-to-end differentiable way, meaning it can be trained so that updates to both the retriever and generator components minimize the difference between the generated text and the target text. The retriever is typically optimized to select documents that will lead to a correct generation, while the generator is optimized to produce accurate text given the input query and the retrieved documents.
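The retriever-generator hand-off can be caricatured in a few lines. Here the retriever is simple word overlap rather than dense passage retrieval, and the generator is a template rather than an LLM, so this only shows how the two components connect, not how a real RAG system is built:

```python
corpus = {
    "doc1": "The Eiffel Tower is 330 metres tall",
    "doc2": "Photosynthesis converts light into chemical energy",
}

def retrieve(query):
    # Retriever stand-in: word overlap instead of DPR-style dense vectors.
    q = set(query.lower().rstrip("?").split())
    return max(corpus.values(),
               key=lambda doc: len(q & set(doc.lower().split())))

def generate(query, passage):
    # Generator stand-in: a real system conditions an LLM (e.g. BART or
    # GPT-2) on the query plus the retrieved passage.
    return f"Q: {query} | Context: {passage}"

query = "How tall is the Eiffel Tower?"
print(generate(query, retrieve(query)))
```

The key property from the text survives even in this caricature: the answer is grounded in a passage fetched at query time, not in anything the "generator" has memorised.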

      To summarize, RAG allows a generative model to:

      • Access vast amounts of structured or unstructured external data.
      • Answer questions or generate content that requires specific knowledge not contained within the model itself.
      • Benefit from up-to-date and expansive datasets, assuming the retriever's corpus is kept current.

      RAG addresses the limitation of standard language models that must rely solely on their internal parameters for generating text. By augmenting generation with on-the-fly retrieval of relevant context, RAG-equipped models can produce more detailed, accurate, and nuanced outputs, especially for tasks like question answering, fact-checking, and content creation where detailed world knowledge is crucial.

      This technique represents a significant advancement in generative AI, allowing models to provide high-quality outputs without memorizing all the facts internally, but rather by knowing (GPT4-0web)

    2. GPT4-0web: What is Retrieval Augmented Generation (RAG)?, https://platform.openai.com/playground?mode=chat&model=gpt-4-1106-preview (accessed 09 Nov 2023)

    1. https://web.archive.org/web/20231108195303/https://axbom.com/aipower/


      Per Axbom gives a nice overview of actors and stakeholders to take into account when thinking about AI's impact and ethics. Some of these are mentioned in the [[EU AI Regulation]], but not all actors mentioned there are mentioned here, I think: the EU Act defines not only users (of the application) but also, separately, users of the output of an application. This is to ensure that outputs from unchecked or illegal applications outside the EU market are covered when admitted to the EU market.

    1. There are many stories about the compute footprint (and thus energy footprint) of AI. This is an interesting example: Microsoft doesn't have the capacity to run its own.

      (this is mostly a test to see if the changes I made to the h. template in Obsidian work as intended.)

    1. Salesforce promotes Einstein GPT as the world’s first generative AI tool for CRM. Built on the GPT-3 (Generative Pre-trained Transformer) architecture and integrated in all of Salesforce Clouds as well as Tableau, MuleSoft, and Slack, Einstein GPT is capable of generating natural language responses to customer queries, creating personalized content, and even drafting entire email messages on behalf of sales representatives.

      Curious to see how AI automation solutions may complement the Experience Cloud products

    1. that minds are constructed out of cooperating (and occasionally competing) “agents.”

      Compare how I discussed an application this morning that deployed multiple AI agents as an interconnected network, each with its own role. [[Rolf Aldo Common Ground AI consensus]]

    1. Common Ground can be conceptualised as a multi-player variant of Pol.is. Instead of voting on Statements in isolation, we match participants into small groups of three people where they are encouraged to deliberate over the Statements they vote on, and where an AI moderator powered by GPT4 synthesises new Statements from the content of their discussion.
      • The new statements synthesizing is interesting. Are these checked with the group of 3?
      • Is the voting like in pol.is where you have an increasing 'cost' of voting / spreading attention?
  6. Oct 2023
    1. However, recent research shows that people do not always engage with explainability tools enough to help improve decision making. The assumption that people will engage with recommendations and explanations has proven to be unfounded


    1. incentive-misalignment problem

      This is provably wrong. 1. Less power-hungry chips are in high demand thanks to mobile computing. 2. Manufacturers keep touting how much less power they consume. 3. Greater power costs greater money. So the incentives are aligned.

    1. Performing optimization in the latent space can more flexibly model underlying data distributions than mechanistic approaches in the original hypothesis space. However, extrapolative prediction in sparsely explored regions of the hypothesis space can be poor. In many scientific disciplines, hypothesis spaces can be vastly larger than what can be examined through experimentation. For instance, it is estimated that there are approximately 10^60 molecules, whereas even the largest chemical libraries contain fewer than 10^10 molecules [12,159]. Therefore, there is a pressing need for methods to efficiently search through and identify high-quality candidate solutions in these largely unexplored regions.

      Question: how does this notion of hypothesis space relate to causal inference and reasoning?

    2. Wang et al., "Scientific discovery in the age of artificial intelligence", Nature, 2023.

      A paper about the current state of using AI/ML for scientific discovery, connected with the AI4Science workshops at major conferences.

      (NOTE: since Springer/Nature don't allow public pdfs to be linked without a paywall, we can't use hypothesis directly on the pdf of the paper, this link is to the website version of it which is what we'll use to guide discussion during the reading group.)

    1. https://web.archive.org/web/20231019053547/https://www.careful.industries/a-thousand-cassandras

      "Despite being written 18 months ago, it lays out many of the patterns and behaviours that have led to industry capture of "AI Safety"," says co-author Rachel Coldicutt (with Anna Williams and Mallory Knodel, for Open Society Foundations).

      For Open Society Foundations by 'careful industries' which is a research/consultancy, founded 2019, all UK based. Subscribed 2 authors on M, and blog.

      A Thousand Cassandras in Zotero.

  7. www.semanticscholar.org
    1. OpenAI is looking to predict performance and safety because models are too big to be evaluated directly. To me this implies a high probability that people will start to replace their own capabilities with models that are not safe and relevant enough. It could cause misalignment between people and their environment, or worse, their perception of their environment.

    1. “What are the enduring questions she should be asking herself?” Weiss said. “Is it OK to work alongside an AI for this type of task versus this type of task? Is it taking away from future opportunities or future skills she might have? I think students do have the capacity to reflect, but I’m not sure right now we’re giving them the right questions.”

      Good points & questions to raise

    1. LLMs are merely engines for generating stylistically plausible output that fits the patterns of their inputs, rather than for producing accurate information. Publishers worry that a rise in their use might lead to greater numbers of poor-quality or error-strewn manuscripts — and possibly a flood of AI-assisted fakes.
      • for: progress trap, progress trap - AI, progress trap - AI - writing research papers

      • comment

        • potential fakes
          • climate science fakes by big oil think tanks
          • Covid and virus research
          • race issues
          • gender issues
    1. Plex is a scientific philosophy. Instead of claiming that science is so powerful that it can explain the understanding of understanding in question, we take understanding as the open question, and set about to determine what science results. [It turns out to be precisely the science we use every day, so nothing need be discarded or overturned - but many surprises result. Some very simple explanations for some very important scientific observations arise naturally in the course of Plex development. For example, from the First Definition, there are several Plex proofs that there was no beginning, contrary to Stephen Hawking's statement that "this idea that time and space should be finite without boundary is just a proposal: it cannot be deduced from some other principle." (A Brief History of Time, p. 136.) The very concept of a "big bang" is strictly an inherent artifact of our science's view of the nature of nature. There was no "initial instant" of time.] Axioms are assumptions. Plex has no axioms - only definitions. (Only) Nothing is assumed to be known without definition, and even that is "by definition",

      It doesn't claim that science can explain everything, but rather, it uses science to explore and understand our understanding of the world. The surprising part is that the science it uses is the same science we use daily, so nothing new needs to be learned or old knowledge discarded.

      One example of a surprising discovery made through Plex is that, contrary to Stephen Hawking's theory, there was no beginning to time and space. This contradicts the popular "big bang" theory, which suggests there was an initial moment when time and space began. According to Plex, this idea of a "big bang" is just a result of how our current science views the nature of the universe.

      Plex also differs from other scientific approaches in that it doesn't rely on axioms, which are assumptions made without proof. Instead, Plex only uses definitions, meaning it only accepts as true what can be clearly defined and understood.

      We're saying: let's consider the concept of a "big bang". In traditional science, we might simply assume its existence:

          big_bang = True

      But in Plex, we would only accept the "big bang" if we can define it:

          def big_bang():
              # Define what a "big bang" is
              # If we can't define it, then it doesn't exist in Plex
              pass

      Let's not assume reality but rather just try to define the elements we need to use

    1. ethics and safety and that is absolutely a concern and something we have a 00:38:29 responsibility to be thinking about and we want to ensure that we stakeholders conservationists Wildlife biologists field biologists are working together to Define an 00:38:42 ethical framework and inspecting these models
      • for: progress trap, progress trap - AI
    1. Salesforce Einstein chatbot GPT features & capabilities

      How Einstein GPT Differs from Einstein AI: - Einstein GPT is an evolution of Salesforce's Einstein AI technology. - It combines proprietary Einstein AI models with ChatGPT and other language models. - Focus of Einstein GPT is on generating natural language responses and content. - Einstein AI, on the other hand, is more focused on predictive analytics and machine learning. - Integration-wise, Einstein GPT can be integrated with other AI technologies like OpenAI. - The combination of Einstein AI and GPT technology enhances efficiency and customer experiences.

  8. Sep 2023
    1. in 2018 you know it was around four percent of papers were based on Foundation models in 2020 90 were and 00:27:13 that number has continued to shoot up into 2023 and at the same time in the non-human domain it's essentially been zero and actually it went up in 2022 because we've 00:27:25 published the first one and the goal here is hey if we can make these kinds of large-scale models for the rest of nature then we should expect a kind of broad scale 00:27:38 acceleration
      • for: accelerating foundation models in non-human communication, non-human communication - anthropogenic impacts, species extinction - AI communication tools, conservation - AI communication tools

      • comment

        • imagine the empathy we can realize to help slow down climate change and species extinction by communicating and listening to the feedback from other species about what they think of our species impacts on their world!
    2. AI turns semantic relationships into geometric relationships
      • for: key idea, key idea - language research , AI - language research - semantic to geometric
    3. the shape which is say Spanish can't possibly be the same shape as English right if you talk to anthropologists they would say different cultures different cosmologies 00:14:45 different ways of viewing the world different ways of gendering verbs obviously going to be different shapes but you know the AI researchers were like whatever let's just try and they took the shape which is Spanish 00:14:59 and the shape which is English and they literally rotated them on top of each other and the point which his dog ended up in the same spot in both
      • for:AI - language research, AI - language research - semantic invariancy
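The rotation trick described in this talk is, in its simplest supervised linear form, the orthogonal Procrustes problem: given matched point pairs, find the rotation that best maps one cloud onto the other (the research described goes further and does this without known pairs). A toy sketch with made-up 2-d "embeddings":

```python
import numpy as np

# A made-up "English" point cloud and a "Spanish" copy of it, secretly
# rotated by 40 degrees: same shape, different orientation.
rng = np.random.default_rng(0)
english = rng.normal(size=(5, 2))
theta = np.deg2rad(40)
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
spanish = english @ rotation

# Orthogonal Procrustes: the rotation W minimising ||spanish @ W - english||
# is U @ Vt, where U, S, Vt is the SVD of spanish.T @ english.
u, _, vt = np.linalg.svd(spanish.T @ english)
w = u @ vt

# Every point (the "dog" point, say) lands back on its counterpart.
print(np.allclose(spanish @ w, english))
```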
    1. Looks like this is how you would get the tool to invoke APIs from different sources like HuggingFace and others.

    1. For a socially and economically sustainable growth path, the labor displacement in the sectors of application must be counterbalanced by job creation within the same and other sectors

      It's 2023 and I don't see anyone planning for this massive job displacement. I think the Hollywood strikes are a sign of things to come.

    1. the Bodhisattva vow can be seen as a method for control that is in alignment with, and informed by, the understanding that singular and enduring control agents do not actually exist. To see that, it is useful to consider what it might be like to have the freedom to control what thought one had next.
      • for: quote, quote - Michael Levin, quote - self as control agent, self - control agent, example, example - control agent - imperfection, spontaneous thought, spontaneous action, creativity - spontaneity
      • quote: Michael Levin

        • the Bodhisattva vow can be seen as a method for control that is in alignment with, and informed by, the understanding that singular and enduring control agents do not actually exist.
      • comment

        • adjacency between
          • nondual awareness
          • self-construct
          • self is illusion
          • singular, solid, enduring control agent
        • adjacency statement
          • nondual awareness is the deep insight that there is no solid, singular, enduring control agent.
          • creativity is unpredictable and spontaneous and would not be possible if there were perfect control
      • example - control agent - imperfection: start - the unpredictability of the realtime emergence of our next exact thought or action is a good example of this
      • example - control agent - imperfection: end

      • triggered insight: not only are thoughts and actions random, but dreams as well

        • I dreamt the night after this about something related to this paper (cannot remember what it is now!)
        • Obviously, I had no clue the idea in this paper would end up exactly as it did in next night's dream!
      • for: bio-buddhism, buddhism - AI, care as the driver of intelligence, Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, care drive, care light cone, multiscale competency architecture of life, nonduality, no-self, self - illusion, self - constructed, self - deconstruction, Bodhisattva vow
      • title: Biology, Buddhism, and AI: Care as the Driver of Intelligence
      • author: Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, AI - ethics
      • date: May 16, 2022
      • source: https://www.mdpi.com/1099-4300/24/5/710/htm

      • summary

        • a trans-disciplinary attempt to develop a framework to deal with a diversity of emerging non-traditional intelligence from new bio-engineered species to AI based on the Buddhist conception of care and compassion for the other.
        • very thought-provoking and some of the explanations and comparisons to evolution actually help to cast a new light on old Buddhist ideas.
        • this is a trans-disciplinary paper synthesizing Buddhist concepts with evolutionary biology
    2. we attempt to bring concepts from both biology and Buddhism together into the language of AI, and suggest practical ways in which care may enrich each field.
      • for: progress trap, AI, AI - care drive
      • comment
        • the precautionary principle needs to be observed with AI because it has such vast artificial cognitive, pattern-recognition processes at its disposal
        • AI will also make mistakes, but the degree of power behind the mistaken decision, recommendation or action is the degree of unintended consequences or progress trap
        • An example nightmare scenario could be:
          • AI could decide that humans are contradicting their own goal of a stable climate system and if it's in control, may think it knows better and perform whole system change that dramatically reduces human induced climate change but actually harms a lot of humans in the process, for reaching the goal of saving the climate system plus a sufficient subset of humans to start all over.
    1. The zombie has functional consciousness, i.e., all the physical and functional conscious processes studied by scientists, such as global informational access. But there would be nothing it is like to have that global informational access and to be that zombie. All that the zombie cognitive system requires is the capacity to produce phenomenal judgments that it can later report.
      • for: AI - consciousness, zombies, question, question - AI - zombie
      • question: AI
        • is AI a zombie?
        • It would seem that by interviewing AI, there would be no way to tell if it's a zombie or not
          • AI would say all the right things that would try to convince you that it's not a zombie
    1. These Measures do not apply where industry associations, enterprises, education and research institutions, public cultural bodies, and related professional bodies, etc., research, develop, and use generative AI technology, but have not provided generative AI services to the (mainland) public.

      These regulations only apply to public services, not to internal uses of AI.

    1. “What it does is it sucks something from you,” he said of A.I. “It takes something from your soul or psyche; that is very disturbing, especially if it has to do with you. It’s like a robot taking your humanity, your soul.”
    1. Instead of being based on hundreds of thousands of lines of code, like all previous versions of self-driving software, this new system had taught itself how to drive by processing billions of frames of video of how humans do it, just like the new large language model chatbots train themselves to generate answers by processing billions of words of human text.
    1. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

      This passage really speaks to me here. This is likely the Crichton-esque danger I could see: apathy from elected officials and general disinterest could really cause the proliferation of unfettered growth in AI research.

    1. inventions have extended man's physical powers rather than the powers of his mind.

      I found this particularly interesting, especially considering the 'AI revolution' of sorts we are experiencing today. With tools such as ChatGPT, one may argue that our 'powers of the mind' will begin to decrease, provided that we become tempted to turn to this tool (and others) to do our work for us. Innovation continues to extend our physical rather than intellectual capabilities.

  9. Aug 2023
    1. Nonetheless, Claude is the first AI tool that has really made me pause and think. Because, I’ve got to admit, Claude is a useful tool to think with—especially if I’m thinking about, and then writing about, another text.
    1. Mills, Anna, Maha Bali, and Lance Eaton. “How Do We Respond to Generative AI in Education? Open Educational Practices Give Us a Framework for an Ongoing Process.” Journal of Applied Learning and Teaching 6, no. 1 (June 11, 2023): 16–30. https://doi.org/10.37074/jalt.2023.6.1.34.

      Annotation url: urn:x-pdf:bb16e6f65a326e4089ed46b15987c1e7

      Search: https://jonudell.info/h/facet/?user=chrisaldrich&max=100&exactTagSearch=true&expanded=true&addQuoteContext=true&url=urn%3Ax-pdf%3Abb16e6f65a326e4089ed46b15987c1e7

    2. ignoring AI altogether–not because they don’t want to navigate it but because it all feels too much or cyclical enough that something else in another two years will upend everything again

      Might generative AI worries follow the track of the MOOC scare? (Many felt that creating courseware was going to put educators out of business...)

    3. For many, generative AI takes a pair of scissors and cuts apart that web. And that can feel like having to start from scratch as a professional.

      How exactly? Give us an example? Otherwise not very clear.

    4. T9 (text prediction):generative AI::handgun:machine gun

      Link to: https://hypothes.is/a/n6wXvkeNEe6DOFexaCD-Qg

    5. Some may not realize it yet, but the shift in technology represented by ChatGPT is just another small evolution in the chain of predictive text with the realms of information theory and corpus linguistics.

      Claude Shannon's work, along with Warren Weaver's introduction in The Mathematical Theory of Communication (1948), shows some of the predictive structure of written communication. This is potentially better underlined for the non-mathematician in John R. Pierce's book An Introduction to Information Theory: Symbols, Signals and Noise (1961), which discusses how one can do a basic analysis of written English to discover that "e" is the most prolific letter, or to predict which letters are more likely to come after other letters. The mathematical structures have interesting consequences, like the fact that crossword puzzles are only possible because of the repetitive nature of the English language, or that one can use the editor's notation "TK" (usually meaning facts or data To Come) in writing papers to make it easy to find missing information prior to publication, because the letter combination T followed by K is statistically so exceptionally rare that its only appearances in long documents are almost assuredly areas which need to be double-checked for data or accuracy.

      Cell phone manufacturers took advantage of the lower levels of this mathematical predictability to create T9 predictive text in early mobile phone technology. This functionality is still used in current cell phones to help speed up our texting abilities. The difference between then and now is that almost everyone takes the predictive magic for granted.

      As anyone with "fat fingers" can attest, your phone doesn't always type out exactly what you mean which can result in autocorrect mistakes (see: DYAC (Damn You AutoCorrect)) of varying levels of frustration or hilarity. This means that when texting, one needs to carefully double check their work before sending their text or social media posts or risk sending their messages to Grand Master Flash instead of Grandma.

      The evolution in technology effected by larger amounts of storage, faster processing speeds, and more text to study means that we've gone beyond the level of predicting a single word or two ahead of what you intend to text, but now we're predicting whole sentences and even paragraphs which make sense within a context. ChatGPT means that one can generate whole sections of text which will likely make some sense.
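The letter-level predictability described above is easy to reproduce: count bigrams in a sample of text and predict each letter's most frequent successor, which is essentially what T9 did at the word level (toy corpus, for illustration only):

```python
from collections import Counter, defaultdict

# Tiny sample; Shannon did this counting over much larger bodies of English.
corpus = ("the theory of communication treats the predictability "
          "of the next letter in the text")

# For each letter, count how often each other letter follows it.
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def predict_next(letter):
    # Predict the most frequent successor seen in the corpus.
    return following[letter].most_common(1)[0][0]

print(predict_next("t"))  # 'h', driven by the many occurrences of "the"
```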

      Sadly, as we know from our T9 experience, this massive jump in predictability doesn't mean that ChatGPT or other predictive artificial intelligence tools are "magically" correct! In fact, quite often they're wrong or will predict nonsense, a phenomenon known as AI hallucination. Just as with T9, we need to take even more time and effort to not only spell check the outputs from the machine, but now we may need to check for the appropriateness of style as well as factual substance!

      The bigger near-term problem is one of human understanding and human communication. While the machine may appear to magically communicate (often on our behalf, if we're publishing its words under our names), is it relaying actual meaning? Is the other person reading these words understanding what was meant to have been communicated? Do the words create knowledge? Insight?

      We need to recall that Claude Shannon specifically carved semantics and meaning out of the picture in the second paragraph of his seminal paper:

      Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.

      So far ChatGPT seems to be accomplishing magic by solving a small part of an engineering problem by being able to explore the adjacent possible. It is far from solving the human semantic problem much less the un-adjacent possibilities (potentially representing wisdom or insight), and we need to take care to be aware of that portion of the unsolved problem. Generative AIs are also just choosing weighted probabilities and spitting out something which is prone to seem possible, but they're not optimizing for which of many potential probabilities is the "best" or the "correct" one. For that, we still need our humanity and faculties for decision making.

      Shannon, Claude E. A Mathematical Theory of Communication. Bell System Technical Journal, 1948.

      Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.

      Pierce, John Robinson. An Introduction to Information Theory: Symbols, Signals and Noise. Second, Revised. Dover Books on Mathematics. 1961. Reprint, Mineola, N.Y: Dover Publications, Inc., 1980. https://www.amazon.com/Introduction-Information-Theory-Symbols-Mathematics/dp/0486240614.

      Shannon, Claude Elwood. “The Bandwagon.” IEEE Transactions on Information Theory 2, no. 1 (March 1956): 3. https://doi.org/10.1109/TIT.1956.1056774.

      We may also need to explore The Bandwagon, an early effect which Shannon noticed and commented upon. Everyone seems to be piling on the AI bandwagon right now...

    1. Approaching GPT-4: AI programming faces a revolution! Meta open-sources Code Llama, the strongest code tool yet (新智元, 2023-08-25). The strongest open-source coding tool in history, Code Llama, is now live. Llama-2's one weak spot, programming, has been patched, and the 34B-parameter model already approaches GPT-4. Meta, which has taken the field by storm with the open-source Llama, has made another big move: a dedicated programming version, Code Llama, is officially open-sourced and available for free commercial use and research.


    1. Roland Barthes (1915-1980, France, literary critic/theorist) declared the death of the author (in English in 1967 and in French a year later). An author's intentions and biography are not the means to explain definitively what the meaning of a (fictional, I think) text is. [[Observator geeft betekenis 20210417124703]], i.e. the reader decides.

      Barthes reduces the author to the scriptor, who does not exist beyond the production of the text. The work stands entirely apart from its maker. Came across this in [[Information edited by Ann Blair]] in the lemma about the Reader.

      Don't disagree with the notion that readers glean meaning in layers from a text that the author never intended. But thinking about the author's intent is one of those layers. Separating the author from their work entirely is cutting yourself off from one source of potential meaning.

      In [[Generative AI detectie doe je met context 20230407085245]] I posit that seeing the author through the text is a necessity as proof of human creation, not #algogen. My point there is that in generated text there's only a scriptor, and no author whose own meaning, intention and existence become visible in the text.

    1. https://www.agconnect.nl/tech-en-toekomst/artificial-intelligence/liquid-neural-networks-in-ai-is-groter-niet-altijd-beter Liquid Neural Networks (liquid, i.e. the nodes in a neural network remain flexible and adaptable after training, unlike deep learning and LLM models). They are also smaller. This improves the explainability of their workings. It also reduces energy consumption (#openvraag: is the energy consumption of usage the concern, or rather that of training? Here it reduces the usage energy)

      Number of nodes reduction can be orders of magnitude. Autonomous steering example talks about 4 orders of magnitude (19 versus 100k nodes)

      Mainly useful for data streams like audio/video, real time data from meteo / mobility sensors. Applications in areas with limited energy (battery usage) and real time data inputs.

    1. Even director Christopher Nolan is warning that AI could be reaching its "Oppenheimer moment," Insider previously reported — in other words, researchers are questioning their responsibility for developing technology that might have unintended consequences.