  1. Dec 2023
    1. Universal Summarizer

      (Summary generated with Kagi's Universal Summarizer.)

      Bandcamp has operated as an online music store for over a decade, providing artists and labels with an easy-to-use platform to sell music directly to fans. While receiving little mainstream attention, Bandcamp has paid out $270 million to artists and maintained a simple, artist-focused design. The platform allows free streaming but encourages direct purchases from artists. Chance the Rapper has been a notable champion of Bandcamp, using it for early mixtapes and helping to bring attention to its role in supporting independent musicians. While other services focus on algorithms and playlists, Bandcamp prioritizes direct artist support through low fees and transparent sales data. It has changed little over the years but provides a niche alternative for direct fan-artist connections without the culture-diluting aspects of other streaming services. Bandcamp's low-key approach has helped it avoid issues faced by competitors while continuing to innovate for artists.

      • for: AI, Anirban Bandyopadhyay, brain gel, AI - gel computer

      • title: A general-purpose organic gel computer that learns by itself

      • author
        • Anirban Bandyopadhyay
        • Pathik Sahoo
        • et al.
      • date: Dec. 6, 2023
      • publication: IOPScience
      • DOI: 10.1088/2634-4386/ad0fec

      • ABSTRACT

        • To build energy minimized superstructures, self-assembling molecules explore astronomical options, colliding ∼10⁹ molecules s⁻¹. Thus far, no computer has used it fully to optimize choices and execute advanced computational theories only by synthesizing supramolecules.
        • To realize it,
          • first, we remotely re-wrote the problem in a language that supramolecular synthesis comprehends.
          • Then, all-chemical neural network synthesizes one helical nanowire for one periodic event. These nanowires self-assemble into gel fibers mapping intricate relations between periodic events in any-data-type,
          • the output is read instantly from optical hologram.
          • Problem-wise, self-assembling layers or neural network depth is optimized to chemically simulate theories discovering invariants for learning.
          • Subsequently, synthesis alone solves classification, feature learning problems instantly with single shot training.
          • Reusable gel begins general-purpose computing that would chemically invent suitable models for problem-specific unsupervised learning. Irrespective of complexity,
            • keeping fixed computing time and power, gel promises a toxic-hardware-free world.
    1. it's extremely dangerous to create such an autonomous agent when we do not know how to control it when we 00:58:22 can't ensure that it will not Escape our control and start making decisions and creating new things which will harm us instead of benefit us now this is not a 00:58:34 Doomsday Prophecy this is not inevitable we can find ways to regulate and control the development and deployment of AI we we don't want
      • for: quote - Yuval Noah Harari - AI progress trap, progress trap - AI, quote - progress trap

      • quote: it is extremely dangerous to create such an autonomous agent when we do not know how to control it, when we can't ensure that it will not escape our control and start making decisions and creating new things which will harm us instead of benefit us

      • author: Yuval Noah Harari
      • date: 2023
    2. AIS at present they have intelligence but they don't have any Consciousness right there is a huge confusion in many places 01:04:06 between intelligence and Consciousness intelligence is the ability to solve problems to create new things whatever Consciousness is the ability to have feelings that okay you can beat me in 01:04:19 chess are you joyful when you win are you sad when you lose AIS and computers as far as we know they have a lot of intelligence they have zero Consciousness
      • for: AI - lack feelings
    1. Let's beegin honey 😚 Please show me some examples of evergreen content

      In answer to "Please show me some examples of evergreen content", HiveDesk suggested:

      Evergreen content refers to content that remains relevant and valuable to readers over a long period of time. When it comes to the Hive blockchain, evergreen content might include beginner's guides to using the platform, explanations of key concepts like decentralized finance, or tutorials on how to develop applications on the Hive network.

      Some examples of evergreen content for the Hive blockchain could be:

      • "A Beginner's Guide to Hive: Getting Started with Decentralized Social Media and Cryptocurrency"
      • "Understanding Decentralized Finance (DeFi) on the Hive Blockchain"
      • "How to Build and Deploy DApps on the Hive Network: A Step-by-Step Tutorial"

      These types of content would provide value to anyone interested in learning about or getting involved with the Hive blockchain, regardless of when they come across the information.

      References:

      • "A Beginner's Guide to Hive: Getting Started with Decentralized Social Media and Cryptocurrency" (hive.io/eco)
      • "Understanding Decentralized Finance (DeFi) on the Hive Blockchain" (hive.blog)
      • "How to Build and Deploy DApps on the Hive Network: A Step-by-Step Tutorial" (developers.hive.io)

      LFG!

    1. i think it's more likely that 00:49:59 that we will think we will think that we this particular set of procedures ai procedures that we linked into our strategic nuclear weapons system uh will keep us safer but we haven't recognized that they're 00:50:12 unintended that there are consequences glitches in it that make it actually stupid and it mistakes the flock of geese for an incoming barrage of russian missiles and and you know unleashes everything in response 00:50:25 before we can intervene
      • for: example - stupid AI - nuclear launch, AI - progress trap - example - nuclear launch
    2. i think the most dangerous thing about ai is not 00:47:11 super smart ai it's uh stupid ai it's artificial intelligence that is good enough to be put in charge of certain processes in our societies but not good enough to not make really 00:47:25 bad mistakes
      • for: quote - Thomas Homer-Dixon, quote - danger of AI, AI progress trap

      • quote: danger of AI

        • I think the most dangerous thing about AI is not super smart AI, it's stupid AI that is good enough to be put in charge of certain processes but not good enough to not make really bad mistakes
      • author: Thomas Homer-Dixon
      • date: 2021
    3. there's this broader issue of of being able to get inside other people's heads as we're driving down the road all the time we're looking at other 00:48:05 people and because we have very advanced theories of mind
      • for: comparison - AI - HI - example - driving, comparison - artificial vs human intelligence - example - driving
    1. LLM based tool to synthesise scientific K

      #2023/12/12 mentioned by [[Howard Rheingold]] on M.

    1. This team, which carries Facebook's future on its shoulders, is small: roughly 30 research scientists and 15 engineers. It has three branches. The main office of the Facebook AI Research group is at Astor Place in New York City, where LeCun manages a team of 20 engineers and researchers; Menlo Park hosts a branch of the same size. In June, FAIR added a smaller five-person group in Paris, in partnership with INRIA (the French national institute for research in computer science and automation). Many teams in other parts of Facebook also collaborate on AI development, such as the language technology team; FAIR is just the main research division. Its researchers and engineers come from every corner of the tech field, and many of them have worked with LeCun before. High-level AI research is not a large field, and many of LeCun's students have founded AI startups, which tend to be acquired by larger companies such as Twitter. LeCun once told Wired, "Deep learning is actually a conspiracy between Geoff Hinton, me, and Yoshua Bengio of the University of Montreal." Hinton develops AI at Google, Bengio divides his time between the University of Montreal and the data-mining company Apstat, and LeCun has countless ties to other prominent companies in the industry.

      The history of FAIR's founding

    1. https://web.archive.org/web/20231206090650/https://www.theguardian.com/artanddesign/2023/dec/05/wizard-of-ai-artificial-intelligence-alan-warburton-dangers-film

      20 min 'documentary' about what AI does to artists, made with AI by an artist. ODI commissioned it. Does this type of thing actually help any debate? Does it raise questions more forcefully? I doubt it, more likely reinforcing anyone's pre-existing notions. More a curiosum, then.

    1. https://web.archive.org/web/20231205084502/https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

      Description of AI use by the Israeli military in Gaza. Cf. [[AI begincondities en evolutie 20190715140742]] on the difference between AGI evolution beginning in a military versus a civic setting: AI restraints are applied on the civil side, not in military applications, meaning the likelihood of emergence lies there, not in civil society. This is true of the EU AI Act too, which excludes the military from its scope.

    1. Given the current state of AI development, fluent on-the-fly interpretation and translation is no longer difficult at all.

      Accurate transcription of the source language (SL), i.e. speech recognition of the SL, is still the bottleneck.

  2. Nov 2023
    1. This illustration shows four alternative ways to nudge an LLM to produce relevant responses:

      • Generic LLM - Use an off-the-shelf model with a basic prompt. The results can be highly variable, as you can experience when e.g. asking ChatGPT about niche topics. This is not surprising, because the model hasn't been exposed to relevant data besides the small prompt.
      • Prompt engineering - Spend time structuring the prompt so that it packs more information about the desired topic, tone, and structure of the response. If you do this carefully, you can nudge the responses to be more relevant, but this can be quite tedious, and the amount of relevant data input to the model is limited.
      • Instruction-tuned LLM - Continue training the model with your own data, as described in our previous article. You can expose the model to arbitrary amounts of query-response pairs that help steer the model to more relevant responses. A downside is that training requires a few hours of GPU computation, as well as a custom dataset.
      • Fully custom LLM - Train an LLM from scratch. In this case, the LLM can be exposed to only relevant data, so the responses can be arbitrarily relevant. However, training an LLM from scratch takes an enormous amount of compute power and a huge dataset, making this approach practically infeasible for most use cases today.

      • RAG with a generic LLM - Insert your dataset in a (vector) database, possibly updating it in real time. At query time, augment the prompt with additional relevant context from the database, which exposes the model to a much larger amount of relevant data, hopefully nudging the model to give a much more relevant response.
      • RAG with an instruction-tuned LLM - Instead of using a generic LLM as in the previous case, you can combine RAG with your custom fine-tuned model for improved relevancy.
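      A minimal sketch of the "RAG with a generic LLM" option, assuming the sentence-transformers library; the model name all-MiniLM-L6-v2 and the toy corpus are illustrative, and the final completion call to the LLM is provider-specific, so it is omitted:

      ```python
      # "RAG with a generic LLM": retrieve relevant context with embeddings,
      # then pack it into the prompt sent to an off-the-shelf model.
      from sentence_transformers import SentenceTransformer, util

      corpus = [
          "RAG augments an LLM prompt with retrieved context.",
          "Instruction tuning trains a model on query-response pairs.",
          "HNSW indexes speed up approximate nearest-neighbor search.",
      ]

      encoder = SentenceTransformer("all-MiniLM-L6-v2")
      corpus_emb = encoder.encode(corpus, convert_to_tensor=True)

      query = "How does retrieval-augmented generation work?"
      query_emb = encoder.encode(query, convert_to_tensor=True)

      # Rank passages by cosine similarity and keep the top two as context.
      hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
      context = "\n".join(corpus[hit["corpus_id"]] for hit in hits)

      # The augmented prompt is what goes to the generic LLM.
      prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
      print(prompt)
      ```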

    2. OUTBNDSweb: Retrieval-Augmented Generation: How to Use Your Data to Guide LLMs, https://outerbounds.com/blog/retrieval-augmented-generation/ (accessed 13 Nov 2023)

    1. I could understand why people poured their lives into craft: there is nothing quite like watching someone enjoy a thing you’ve made.

      key point - the connection through creativity. Relate to arts & storytelling

    1. Algorithmocene noun /ˈalɡərɪðməsiːn/ — presumably the next geological epoch following our short-lived Anthropocene

      I'm beginning to prefer the term Algorithmocene to Robotocene.

    1. Fine-tuning takes a pre-trained LLM and further trains the model on a smaller dataset, often with data not previously used to train the LLM, to improve the LLM’s performance for a particular task.

      LLMs can be extended with both RAG and fine-tuning. Fine-tuning is appropriate when you want to customize an LLM to perform well in a particular domain using private data. For example, you can fine-tune an LLM to become better at producing Python programs by further training the LLM on high-quality Python source code.

      In contrast, you should use RAG when you are able to augment your LLM prompt with data that was not known to your LLM at the time of training, such as real-time data, personal (user) data, or context information useful for the prompt.

    2. Vector databases are used to retrieve relevant documents using similarity search. Vector databases can be standalone or embedded with the LLM application (e.g., Chroma embedded vector database). When structured (tabular) data is needed, an operational data store, such as a feature store, is typically used. Popular vector databases and feature stores are Weaviate and Hopsworks that both provide time-unlimited free tiers.
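      A short sketch of the embedded-database case, using the Chroma client API the text mentions; the collection name, documents, and ids are made up for illustration:

      ```python
      # Embedded (in-process) vector database: no separate server needed.
      import chromadb

      client = chromadb.Client()
      docs = client.create_collection(name="docs")

      # Chroma embeds the documents with its default embedding function.
      docs.add(
          documents=["retrieval augmented generation adds context to prompts",
                     "feature stores serve structured tabular data"],
          ids=["doc1", "doc2"],
      )

      # Similarity search: embed the query, return the nearest documents.
      results = docs.query(query_texts=["what is RAG?"], n_results=1)
      print(results["documents"])
      ```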
    3. RAG LLMs can outperform LLMs without retrieval by a large margin with much fewer parameters, and they can update their knowledge by replacing their retrieval corpora, and provide citations for users to easily verify and evaluate the predictions.
    4. HopWORKSweb: Retrieval Augmented Generation (RAG) for LLMs, https://www.hopsworks.ai/dictionary/retrieval-augmented-generation-llm (accessed 09 Nov 2023)

    1. The key enablers of this solution are:

      • The embeddings generated with Vertex AI Embeddings for Text
      • Fast and scalable vector search by Vertex AI Vector Search

      Embedding space is a map of the contexts of meanings: values are assigned in an n-dimensional space such that semantically similar inputs land near one another, tying meaning between concepts.

      Example of vectorized n-dimensional embedding

    2. With the embedding API, you can apply the innovation of embeddings, combined with the LLM capability, to various text processing tasks, such as:

      • LLM-enabled Semantic Search: text embeddings can be used to represent both the meaning and intent of a user's query and documents in the embedding space. Documents that have similar meaning to the user's query intent will be found fast with vector search technology. The model is capable of generating text embeddings that capture the subtle nuances of each sentence and paragraph in the document.
      • LLM-enabled Text Classification: LLM text embeddings can be used for text classification with a deep understanding of different contexts without any training or fine-tuning (so-called zero-shot learning). This wasn't possible with past language models without task-specific training.
      • LLM-enabled Recommendation: the text embedding can be used for recommendation systems as a strong feature for training recommendation models such as the Two-Tower model. The model learns the relationship between the query and candidate embeddings, resulting in a next-gen user experience with semantic product recommendation.
      • LLM-enabled Clustering, Anomaly Detection, Sentiment Analysis, and more can also be handled with LLM-level deep semantic understanding.
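      A hedged sketch of generating such embeddings, assuming the 2023-era Vertex AI Python SDK and the textembedding-gecko@001 model name; the project id and location are placeholders:

      ```python
      # Generate text embeddings with Vertex AI Embeddings for Text.
      import vertexai
      from vertexai.language_models import TextEmbeddingModel

      vertexai.init(project="your-project-id", location="us-central1")

      model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")
      embeddings = model.get_embeddings(["What is semantic search?"])

      # Each result carries a dense vector (768 dimensions for gecko).
      print(len(embeddings[0].values))
      ```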
    3. Grounded to business facts: In this demo, we didn't try having the LLM memorize the 8 million items with complex and lengthy prompt engineering. Instead, we attached the Stack Overflow dataset to the model as an external memory using vector search, and used no prompt engineering. This means the outputs are all directly "grounded" (connected) to the business facts, not artificial output from the LLM. So the demo is ready to be served today as a production service with mission-critical business responsibility. It does not suffer from the limitation of LLM memory or unexpected behaviors of LLMs such as hallucinations.
    4. GCloudAIweb: Vertex AI Embeddings for Text: Grounding LLMs made easy, https://cloud.google.com/blog/products/ai-machine-learning/how-to-use-grounding-for-your-llms-with-text-embeddings (accessed 09 Nov 2023)

    1. Preparation Steps

      • Ingest data into a database. The destination may be an array or a JSON data type.
      • Harmonize data. This is a lightweight data transformation step.
      • Encode data. This step converts the ingested data into embeddings. One option is to use an external API. For example, OpenAI's ADA and sentence_transformer have many pre-trained models to convert unstructured data like images and audio into vectors.
      • Load embedding vectors. Data is moved to a table that mirrors the original table but has an additional column of type 'vector', JSON, or a blob that stores the vectors.
      • Performance tuning. SingleStoreDB provides JSON_ARRAY_PACK, and indexing vectors using HNSW as mentioned earlier. This allows parallel scans using SIMD.
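      A sketch of the encode and load steps above, using a pre-trained sentence_transformer model as the encoder; the rows and the mirrored schema are illustrative, not SingleStoreDB-specific:

      ```python
      # Encode rows into embeddings, then mirror them with a vector column.
      from sentence_transformers import SentenceTransformer

      rows = [
          {"id": 1, "text": "customer complained about late delivery"},
          {"id": 2, "text": "product arrived damaged in transit"},
      ]

      model = SentenceTransformer("all-MiniLM-L6-v2")

      # Mirror "table": original columns plus an added 'vector' column.
      mirrored = [{**row, "vector": model.encode(row["text"]).tolist()}
                  for row in rows]

      print(len(mirrored[0]["vector"]))  # 384-dimensional for this model
      ```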

    2. In the new AI model, you ingest the data in real time, apply your models by reaching to one or multiple GPT services and action on the data while your users are in the online experience. These GPT models may be used for recommendation, classification personalization, etc., services on real-time data. Recent developments, such as LangChain and AutoGPT, may further disrupt how modern applications are deployed and delivered.
    3. Let’s say, for example, you search for a very specific product on a retailer’s website, and the product is not available. An additional API call to an LLM with your request that returned zero results may result in a list of similar products. This is an example of a vector search, which is also known as a similarity or semantic search.
    4. Modes of Private Data consumption:

      1. Train a custom LLM - requires massive infrastructure, investment, and deep AI skills
      2. Tune the LLM - utilizes model weights to fine-tune an existing model; a new category of LLMOps; similar issues to #1
      3. Prompt general-purpose LLMs - uses modeled context input with Retrieval Augmented Generation (Facebook)

      For leveraging prompts, there are two options:

      • Short-term memory for LLMs that use APIs for model inputs
      • Long-term memory for LLMs that persist the model inputs

      Short-term memory is ephemeral while long-term memory introduces persistence.

    5. Conventional search works on keys. However, when the ask is a natural query, that sentence needs to be converted into a structure so that it can be compared with words that have similar representation. This structure is called an embedding. An embedding uses vectors that assign coordinates into a graph of numbers — like an array. An embedding is high dimensional as it uses many vectors to perform semantic search.

      When a search is made on new text, the model calculates the "distance" between terms. For example, searching for "king" is closer to "man" than to "woman." This distance is calculated over the "nearest neighbors" using functions like cosine similarity, dot product, and Euclidean distance. This is where "approximate nearest neighbors" (ANN) algorithms are used to reduce the vector search space. A very popular way to index the vector space is through a library called 'Hierarchical Navigable Small World (HNSW).' Many vector databases and libraries like FAISS use HNSW to speed up vector search.
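      A worked example of the distance measures named above, over made-up 3-dimensional "embeddings" for king/man/woman:

      ```python
      import numpy as np

      king = np.array([0.9, 0.8, 0.1])
      man = np.array([0.8, 0.7, 0.2])
      woman = np.array([0.2, 0.7, 0.8])

      def cosine(a, b):
          # Cosine similarity: 1.0 means identical direction.
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      print(cosine(king, man), cosine(king, woman))  # ~0.99 vs ~0.63: "king" is closer to "man"
      print(float(king @ man), float(king @ woman))  # dot product
      print(float(np.linalg.norm(king - man)))       # Euclidean distance
      ```

      An ANN index such as HNSW trades a little accuracy for avoiding this exhaustive pairwise computation across millions of vectors.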

    6. The different options for storing and querying vectors for long-term memory in AI search include:

      • Native vector databases - many non-relational DBMSs are adding vector support, such as Elastic; others are Pinecone, Qdrant, etc.
      • SingleStoreDB - supports vector embeddings and semantic search
      • Apache Parquet or CSV columnar data - slow indices if used

    7. AIMONKSweb: How to Use Large Language Models (LLMs) on Private Data: A Data Strategy Guide, https://medium.com/aimonks/how-to-use-large-language-models-llms-on-private-data-a-data-strategy-guide-812cfd7c5c79 (accessed 09 Nov 2023)

    1. Retrieval Augmented Generation (RAG) is a method in natural language processing (NLP) that combines the power of both neural language models and information retrieval methods to generate responses or text that are informed by a large body of knowledge. The concept was introduced by Facebook AI researchers and represents a hybrid approach to incorporating external knowledge into generative models.

      RAG models effectively leverage a large corpus of text data without requiring it to be stored in the parameters of the model. This is achieved by utilizing a retriever-generator framework:

      1. The Retriever component is responsible for finding relevant documents or passages from a large dataset (like Wikipedia or a corpus of scientific articles) that are likely to contain helpful information for generating a response. This retrieval is typically based on vector similarity between the query and the documents in the dataset, often employing techniques like dense passage retrieval (DPR).

      2. The Generator component is a large pre-trained language model (like BART or GPT-2) that generates a response by conditioning on both the input query and the documents retrieved by the retriever. It integrates the information from the external texts to produce more informed, accurate, and contextually relevant text outputs.

      The RAG model performs this process in an end-to-end differentiable manner, meaning it can be trained in a way that updates both the retriever and generator components to minimize the difference between the generated text and the target text. The retriever is typically optimized to select documents that will lead to a correct generation, while the generator is optimized to produce accurate text given the input query and the retrieved documents.
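      A hedged sketch of the retriever half described above, using the DPR question/passage encoders released by Facebook AI (model ids as published on the Hugging Face hub); the question and passages are toy examples:

      ```python
      # Dense passage retrieval: relevance = dot product of query/passage embeddings.
      import torch
      from transformers import (
          DPRContextEncoder, DPRContextEncoderTokenizer,
          DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
      )

      q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
      q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
      c_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
      c_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

      question = "Who introduced RAG?"
      passages = ["RAG was introduced by Facebook AI researchers.",
                  "Bandcamp is an online music store."]

      with torch.no_grad():
          q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output
          c_emb = c_enc(**c_tok(passages, return_tensors="pt", padding=True)).pooler_output

      # The generator would condition on the top-scoring passage plus the question.
      scores = (q_emb @ c_emb.T).squeeze(0)
      print(passages[int(scores.argmax())])
      ```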

      To summarize, RAG allows a generative model to:

      • Access vast amounts of structured or unstructured external data.
      • Answer questions or generate content that requires specific knowledge not contained within the model itself.
      • Benefit from up-to-date and expansive datasets, assuming the retriever's corpus is kept current.

      RAG addresses the limitation of standard language models that must rely solely on their internal parameters for generating text. By augmenting generation with on-the-fly retrieval of relevant context, RAG-equipped models can produce more detailed, accurate, and nuanced outputs, especially for tasks like question answering, fact-checking, and content creation where detailed world knowledge is crucial.

      This technique represents a significant advancement in generative AI, allowing models to provide high-quality outputs without memorizing all the facts internally, but rather by knowing (GPT4-0web)

    2. GPT4-0web: What is Retrieval Augmented Generation (RAG)?, https://platform.openai.com/playground?mode=chat&model=gpt-4-1106-preview (accessed 09 Nov 2023)

    1. https://web.archive.org/web/20231108195303/https://axbom.com/aipower/

      https://axbom.com/content/images/size/w2000/2023/11/aipower-axbom-ver1.png

      Per Axbom gives a nice overview of actors and stakeholders to take into account when thinking about AI's impact and ethics. Some of these are mentioned in the [[EU AI Regulation]], but not all actors mentioned there are mentioned here, I think: the EU act not only defines users (of the application) but also, separately, users of the output of an application. This is to ensure that outputs from un-checked or illegal applications outside the EU market are still covered when used in the EU market.

    1. There are many stories about the compute footprint (and thus energy footprint) of AI. This is an interesting example: Microsoft doesn't have the capacity to run its own.

      (this is mostly a test to see if the changes I made to the h. template in Obsidian work as intended.)

    1. Salesforce promotes Einstein GPT as the world’s first generative AI tool for CRM. Built on the GPT-3 (Generative Pre-trained Transformer) architecture and integrated in all of Salesforce Clouds as well as Tableau, MuleSoft, and Slack, Einstein GPT is capable of generating natural language responses to customer queries, creating personalized content, and even drafting entire email messages on behalf of sales representatives.

      Curious to see how AI automation solutions may complement the Experience Cloud products

    1. that minds are constructed out of cooperating (and occasionally competing) “agents.”

      Cf. how I discussed an application this morning that deployed multiple AI agents as an interconnected network, each with its own role. [[Rolf Aldo Common Ground AI consensus]]

    1. Common Ground can be conceptualised as a multi-player variant of Pol.is. Instead of voting on Statements in isolation, we match participants into small groups of three people where they are encouraged to deliberate over the Statements they vote on, and where an AI moderator powered by GPT4 synthesises new Statements from the content of their discussion.
      • The new statements synthesizing is interesting. Are these checked with the group of 3?
      • Is the voting like in pol.is where you have an increasing 'cost' of voting / spreading attention?
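      A speculative sketch of that AI-moderator step: asking GPT-4 to synthesize a new Statement from a three-person discussion. It assumes the 2023-era OpenAI Python SDK; the prompt wording is invented, not Common Ground's actual prompt:

      ```python
      # Synthesize a new votable Statement from a small-group discussion.
      import openai  # requires OPENAI_API_KEY in the environment

      discussion = [
          "P1: Voting alone misses the reasons behind disagreement.",
          "P2: Small groups make people explain their votes.",
          "P3: Synthesized statements should be checked back with the group.",
      ]

      response = openai.ChatCompletion.create(
          model="gpt-4",
          messages=[
              {"role": "system",
               "content": "Synthesize one neutral, votable Statement from this discussion."},
              {"role": "user", "content": "\n".join(discussion)},
          ],
      )
      print(response["choices"][0]["message"]["content"])
      ```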
  3. Oct 2023
    1. However, recent research shows that people do not always engage with explainability tools enough to help improve decision making. The assumption that people will engage with recommendations and explanations has proven to be unfounded

    1. incentive-misalignment problem

      This is provably wrong. 1. Less power-hungry chips are in high demand thanks to mobile computing. 2. Manufacturers keep touting how much less power they consume. 3. Greater power costs greater money. So the incentives are aligned.

    1. Performing optimization in the latent space can more flexibly model underlying data distributions than mechanistic approaches in the original hypothesis space. However, extrapolative prediction in sparsely explored regions of the hypothesis space can be poor. In many scientific disciplines, hypothesis spaces can be vastly larger than what can be examined through experimentation. For instance, it is estimated that there are approximately 10⁶⁰ molecules, whereas even the largest chemical libraries contain fewer than 10¹⁰ molecules [12, 159]. Therefore, there is a pressing need for methods to efficiently search through and identify high-quality candidate solutions in these largely unexplored regions.

      Question: how does this notion of hypothesis space relate to causal inference and reasoning?

    2. Wang et al., "Scientific discovery in the age of artificial intelligence", Nature, 2023.

      A paper about the current state of using AI/ML for scientific discovery, connected with the AI4Science workshops at major conferences.

      (NOTE: since Springer/Nature don't allow public pdfs to be linked without a paywall, we can't use hypothesis directly on the pdf of the paper, this link is to the website version of it which is what we'll use to guide discussion during the reading group.)

    1. https://web.archive.org/web/20231019053547/https://www.careful.industries/a-thousand-cassandras

      "Despite being written 18 months ago, it lays out many of the patterns and behaviours that have led to industry capture of "AI Safety"", co-author Rachel Coldicutt ( et Anna Williams, and Mallory Knodel for Open Society Foundations. )

      Written for the Open Society Foundations by 'careful industries', a research/consultancy founded in 2019, all UK based. Subscribed to the 2 authors on M, and to the blog.

      A Thousand Cassandras in Zotero.

  4. www.semanticscholar.org
    1. OpenAI is looking to predict performance and safety because models are too big to be evaluated directly. To me this implies a high probability that people will start to replace their own capabilities with models that are not safe and relevant enough. It could cause misalignment between people and their environment, or worse, their perception of their environment.

    1. “What are the enduring questions she should be asking herself?” Weiss said. “Is it OK to work alongside an AI for this type of task versus this type of task? Is it taking away from future opportunities or future skills she might have? I think students do have the capacity to reflect, but I’m not sure right now we’re giving them the right questions.”

      Good points & questions to raise

    1. LLMs are merely engines for generating stylistically plausible output that fits the patterns of their inputs, rather than for producing accurate information. Publishers worry that a rise in their use might lead to greater numbers of poor-quality or error-strewn manuscripts — and possibly a flood of AI-assisted fakes.
      • for: progress trap, progress trap - AI, progress trap - AI - writing research papers

      • comment

        • potential fakes
          • climate science fakes by big oil think tanks
          • Covid and virus research
          • race issues
          • gender issues
    1. Plex is a scientific philosophy. Instead of claiming that science is so powerful that it can explain the understanding of understanding in question, we take understanding as the open question, and set about to determine what science results. [It turns out to be precisely the science we use every day, so nothing need be discarded or overturned - but many surprises result. Some very simple explanations for some very important scientific observations arise naturally in the course of Plex development. For example, from the First Definition, there are several Plex proofs that there was no beginning, contrary to Stephen Hawking's statement that "this idea that time and space should be finite without boundary is just a proposal: it cannot be deduced from some other principle." (A Brief History of Time, p. 136.) The very concept of a "big bang" is strictly an inherent artifact of our science's view of the nature of nature. There was no "initial instant" of time.] Axioms are assumptions. Plex has no axioms - only definitions. (Only) Nothing is assumed to be known without definition, and even that is "by definition",

      It doesn't claim that science can explain everything, but rather, it uses science to explore and understand our understanding of the world. The surprising part is that the science it uses is the same science we use daily, so nothing new needs to be learned or old knowledge discarded.

      One example of a surprising discovery made through Plex is that, contrary to Stephen Hawking's theory, there was no beginning to time and space. This contradicts the popular "big bang" theory, which suggests there was an initial moment when time and space began. According to Plex, this idea of a "big bang" is just a result of how our current science views the nature of the universe.

      Plex also differs from other scientific approaches in that it doesn't rely on axioms, which are assumptions made without proof. Instead, Plex only uses definitions, meaning it only accepts as true what can be clearly defined and understood.

      We're saying: let's consider the concept of a "big bang". In traditional science, we might simply assume its existence, thinking `big_bang = True`. But in Plex, we would only accept the "big bang" if we can define it:

      ```python
      def big_bang():
          # Define what a "big bang" is.
          # If we can't define it, then it doesn't exist in Plex.
          pass
      ```

      Let's not assume reality but rather just try to define the elements we need to use.

    1. ethics and safety and that is absolutely a concern and something we have a 00:38:29 responsibility to be thinking about and we want to ensure that we stakeholders conservationists Wildlife biologists field biologists are working together to Define an 00:38:42 ethical framework and inspecting these models
      • for: progress trap, progress trap - AI
    1. Salesforce Einstein chatbot GPT features & capabilities

      How Einstein GPT differs from Einstein AI:

      • Einstein GPT is an evolution of Salesforce's Einstein AI technology.
      • It combines proprietary Einstein AI models with ChatGPT and other language models.
      • The focus of Einstein GPT is on generating natural language responses and content.
      • Einstein AI, by contrast, is more focused on predictive analytics and machine learning.
      • Integration-wise, Einstein GPT can be integrated with other AI technologies like OpenAI's.
      • The combination of Einstein AI and GPT technology enhances efficiency and customer experiences.

  5. Sep 2023
    1. in 2018 you know it was around four percent of papers were based on Foundation models in 2020 90 were and 00:27:13 that number has continued to shoot up into 2023 and at the same time in the non-human domain it's essentially been zero and actually it went up in 2022 because we've 00:27:25 published the first one and the goal here is hey if we can make these kinds of large-scale models for the rest of nature then we should expect a kind of broad scale 00:27:38 acceleration
      • for: accelerating foundation models in non-human communication, non-human communication - anthropogenic impacts, species extinction - AI communication tools, conservation - AI communication tools

      • comment

        • imagine the empathy we can realize to help slow down climate change and species extinction by communicating and listening to the feedback from other species about what they think of our species impacts on their world!
    2. AI turns semantic relationships into geometric relationships
      • for: key idea, key idea - language research , AI - language research - semantic to geometric
    3. the shape which is say Spanish can't possibly be the same shape as English right if you talk to anthropologists they would say different cultures different cosmologies 00:14:45 different ways of viewing the world different ways of gendering verbs obviously going to be different shapes but you know the AI researchers were like whatever let's just try and they took the shape which is Spanish 00:14:59 and the shape which is English and they literally rotated them on top of each other and the point which his dog ended up in the same spot in both
      • for:AI - language research, AI - language research - semantic invariancy
    1. Looks like this is how you would get the tool to invoke APIs from different sources like HuggingFace and others.

    1. For a socially and economically sustainable growth path, the labor displacement in the sectors of application must be counterbalanced by job creation within the same and other sectors

      It's 2023 and I don't see anyone planning for this massive job displacement; I think the Hollywood strikes are a sign of things to come.

    1. the Bodhisattva vow can be seen as a method for control that is in alignment with, and informed by, the understanding that singular and enduring control agents do not actually exist. To see that, it is useful to consider what it might be like to have the freedom to control what thought one had next.
      • for: quote, quote - Michael Levin, quote - self as control agent, self - control agent, example, example - control agent - imperfection, spontaneous thought, spontaneous action, creativity - spontaneity
      • quote: Michael Levin

        • the Bodhisattva vow can be seen as a method for control that is in alignment with, and informed by, the understanding that singular and enduring control agents do not actually exist.
      • comment

        • adjacency between
          • nondual awareness
          • self-construct
          • self is illusion
          • singular, solid, enduring control agent
        • adjacency statement
          • nondual awareness is the deep insight that there is no solid, singular, enduring control agent.
          • creativity is unpredictable and spontaneous and would not be possible if there were perfect control
      • example - control agent - imperfection: start - the unpredictability of the realtime emergence of our next exact thought or action is a good example of this
      • example - control agent - imperfection: end

      • triggered insight: not only are thoughts and actions random, but dreams as well

        • I dreamt the night after this about something related to this paper (cannot remember what it is now!)
        • Obviously, I had no clue the idea in this paper would end up exactly as it did in next night's dream!
      • for: bio-buddhism, buddhism - AI, care as the driver of intelligence, Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, care drive, care light cone, multiscale competency architecture of life, nonduality, no-self, self - illusion, self - constructed, self - deconstruction, Bodhisattva vow, AI - ethics
      • title: Biology, Buddhism, and AI: Care as the Driver of Intelligence
      • author: Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane
      • date: May 16, 2022
      • source: https://www.mdpi.com/1099-4300/24/5/710/htm

      • summary

        • a trans-disciplinary attempt to develop a framework to deal with a diversity of emerging non-traditional intelligence from new bio-engineered species to AI based on the Buddhist conception of care and compassion for the other.
        • very thought-provoking and some of the explanations and comparisons to evolution actually help to cast a new light on old Buddhist ideas.
        • this is a trans-disciplinary paper synthesizing Buddhist concepts with evolutionary biology
    2. we attempt to bring concepts from both biology and Buddhism together into the language of AI, and suggest practical ways in which care may enrich each field.
      • for: progress trap, AI, AI - care drive
      • comment
        • the precautionary principle needs to be observed with AI because it has such vast artificial cognitive, pattern-recognition processes at its disposal
        • AI will also make mistakes, but the degree of power behind the mistaken decision, recommendation or action is the degree of unintended consequences or progress trap
        • An example nightmare scenario could be:
          • AI could decide that humans are contradicting their own goal of a stable climate system and if it's in control, may think it knows better and perform whole system change that dramatically reduces human induced climate change but actually harms a lot of humans in the process, for reaching the goal of saving the climate system plus a sufficient subset of humans to start all over.
    1. The zombie has functional consciousness, i.e., all the physical and functional conscious processes studied by scientists, such as global informational access. But there would be nothing it is like to have that global informational access and to be that zombie. All that the zombie cognitive system requires is the capacity to produce phenomenal judgments that it can later report.
      • for: AI - consciousness, zombies, question, question - AI - zombie
      • question: AI
        • is AI a zombie?
        • It would seem that by interviewing AI, there would be no way to tell if it's a zombie or not
          • AI would say all the right things that would try to convince you that it's not a zombie
    1. These Measures do not apply where industry associations, enterprises, education and research institutions, public cultural bodies, and related professional bodies, etc., research, develop, and use generative AI technology, but have not provided generative AI services to the (mainland) public.

      These regulations only apply to public services, not to internal uses of AI.

    1. “What it does is it sucks something from you,” he said of A.I. “It takes something from your soul or psyche; that is very disturbing, especially if it has to do with you. It’s like a robot taking your humanity, your soul.”
    1. Instead of being based on hundreds of thousands of lines of code, like all previous versions of self-driving software, this new system had taught itself how to drive by processing billions of frames of video of how humans do it, just like the new large language model chatbots train themselves to generate answers by processing billions of words of human text.
    1. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

      This passage really speaks to me here. This is likely the Crichton-esque danger I could see: apathy from elected officials and general disinterest could really cause the proliferation of unfettered growth in AI research.

    1. inventions have extended man's physical powers rather than the powers of his mind.

      I found this particularly interesting, especially considering the 'AI revolution' of sorts we are experiencing today. With tools such as ChatGPT, one may argue that our 'powers of the mind' will begin to decrease as we become tempted to turn to this tool (and others) to do our work for us. Innovation continues to extend our physical rather than intellectual capabilities.

  6. Aug 2023
    1. Nonetheless, Claude is the first AI tool that has really made me pause and think. Because, I've got to admit, Claude is a useful tool to think with—especially if I'm thinking about, and then writing about, another text.
    1. Mills, Anna, Maha Bali, and Lance Eaton. “How Do We Respond to Generative AI in Education? Open Educational Practices Give Us a Framework for an Ongoing Process.” Journal of Applied Learning and Teaching 6, no. 1 (June 11, 2023): 16–30. https://doi.org/10.37074/jalt.2023.6.1.34.

      Annotation url: urn:x-pdf:bb16e6f65a326e4089ed46b15987c1e7

      Search: https://jonudell.info/h/facet/?user=chrisaldrich&max=100&exactTagSearch=true&expanded=true&addQuoteContext=true&url=urn%3Ax-pdf%3Abb16e6f65a326e4089ed46b15987c1e7

    2. ignoring AI altogether–not because they don't want to navigate it but because it all feels too much or cyclical enough that something else in another two years will upend everything again

      Might generative AI worries follow the track of the MOOC scare? (Many felt that creating courseware was going to put educators out of business...)

    3. For many, generative AI takes a pair of scissors and cuts apart that web. And that can feel like having to start from scratch as a professional.

      How exactly? Give us an example? Otherwise not very clear.

    4. T9 (text prediction):generative AI::handgun:machine gun

      Link to: https://hypothes.is/a/n6wXvkeNEe6DOFexaCD-Qg

    5. Some may not realize it yet, but the shift in technology represented by ChatGPT is just another small evolution in the chain of predictive text with the realms of information theory and corpus linguistics.

      Claude Shannon's work, along with Warren Weaver's introduction in The Mathematical Theory of Communication (1948), shows some of the predictive structure of written communication. This is potentially better underlined for the non-mathematician in John R. Pierce's book An Introduction to Information Theory: Symbols, Signals and Noise (1961), which discusses how one can do a basic analysis of written English to discover that "e" is the most prolific letter or to predict which letters are more likely to come after other letters. The mathematical structures have interesting consequences, like the fact that crossword puzzles are only possible because of the repetitive nature of the English language, or that one can use the editor's notation "TK" (usually meaning facts or data To Come) in writing their papers to make it easy to find missing information prior to publication, because the statistical existence of the letter combination T followed by K is exceptionally rare and the only appearances of it in long documents are almost assuredly areas which need to be double-checked for data or accuracy.
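      A toy version of that letter-level analysis, counting letter frequencies and which letters follow "t" in a small sample, in the spirit of Shannon's n-gram approximations of English; the sample text is made up:

      ```python
      # Letter and bigram statistics over a tiny sample of English.
      from collections import Counter

      text = "the quick brown fox jumps over the lazy dog and the cat".replace(" ", "")

      letter_freq = Counter(text)
      bigrams = Counter(zip(text, text[1:]))

      print(letter_freq.most_common(3))  # most frequent letters in this sample

      # Probability of each letter following 't' ("tk" never appears).
      after_t = {b: n for (a, b), n in bigrams.items() if a == "t"}
      total = sum(after_t.values())
      print({b: n / total for b, n in after_t.items()})
      ```

      Scale the same counting up from letters to words and phrases and you have the core of T9-style prediction; today's models extend it to whole sentences and paragraphs.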

      Cell phone manufacturers took advantage of the lower levels of this mathematical predictability to create T9 predictive text in early mobile phone technology. This functionality is still used in current cell phones to help speed up our texting abilities. The difference between then and now is that almost everyone takes the predictive magic for granted.

      As anyone with "fat fingers" can attest, your phone doesn't always type out exactly what you mean which can result in autocorrect mistakes (see: DYAC (Damn You AutoCorrect)) of varying levels of frustration or hilarity. This means that when texting, one needs to carefully double check their work before sending their text or social media posts or risk sending their messages to Grand Master Flash instead of Grandma.

      The evolution in technology effected by larger amounts of storage, faster processing speeds, and more text to study means that we've gone beyond the level of predicting a single word or two ahead of what you intend to text, but now we're predicting whole sentences and even paragraphs which make sense within a context. ChatGPT means that one can generate whole sections of text which will likely make some sense.

      Sadly, as we know from our T9 experience, this massive jump in predictability doesn't mean that ChatGPT or other predictive artificial intelligence tools are "magically" correct! In fact, quite often they're wrong or will predict nonsense, a phenomenon known as AI hallucination. Just as with T9, we need to take even more time and effort to not only spell check the outputs from the machine, but now we may need to check for the appropriateness of style as well as factual substance!

      The bigger near-term problem is one of human understanding and human communication. While the machine may appear to magically communicate (often on our behalf if we're publishing its words under our names), is it relaying actual meaning? Is the other person reading these words understanding what was meant to have been communicated? Do the words create knowledge? Insight?

      We need to recall that Claude Shannon specifically carved semantics and meaning out of the picture in the second paragraph of his seminal paper:

      Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.

      So far ChatGPT seems to be accomplishing magic by solving a small part of an engineering problem by being able to explore the adjacent possible. It is far from solving the human semantic problem much less the un-adjacent possibilities (potentially representing wisdom or insight), and we need to take care to be aware of that portion of the unsolved problem. Generative AIs are also just choosing weighted probabilities and spitting out something which is prone to seem possible, but they're not optimizing for which of many potential probabilities is the "best" or the "correct" one. For that, we still need our humanity and faculties for decision making.


      Shannon, Claude E. A Mathematical Theory of Communication. Bell System Technical Journal, 1948.

      Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.

      Pierce, John Robinson. An Introduction to Information Theory: Symbols, Signals and Noise. Second, Revised. Dover Books on Mathematics. 1961. Reprint, Mineola, N.Y: Dover Publications, Inc., 1980. https://www.amazon.com/Introduction-Information-Theory-Symbols-Mathematics/dp/0486240614.

      Shannon, Claude Elwood. “The Bandwagon.” IEEE Transactions on Information Theory 2, no. 1 (March 1956): 3. https://doi.org/10.1109/TIT.1956.1056774.


      We may also need to explore The Bandwagon, an early effect which Shannon noticed and commented upon. Everyone seems to be piling on the AI bandwagon right now...

    1. Approaching GPT-4: AI programming faces a revolution! Meta open-sources Code Llama, the strongest code tool yet. (Xinzhiyuan, 2023-08-25.) The strongest open-source code tool in history, Code Llama, is now live: Llama-2's one shortcoming, programming, has been patched, and the 34B-parameter model comes close to GPT-4. Meta, which has gone all-in on open-sourcing Llama, made another big move today: the programming-specific Code Llama is officially open-sourced and free for commercial use and research.

      This keeps getting scarier: more and more things will soon have the ability to write code directly.

    1. Roland Barthes (1915-1980, France, literary critic/theorist) declared the death of the author (in English in 1967 and in French a year later). An author's intentions and biography are not the means to explain definitively what the meaning of a (fictional, I think) text is. [[Observator geeft betekenis 20210417124703]] i.e. the reader determines meaning.

      Barthes reduces the author to the scriptor, who exists only with respect to the production of the text. The work stands entirely apart from its maker. Came across this in [[Information edited by Ann Blair]] in the lemma on the Reader.

      I don't disagree with the notion that readers glean meaning in layers from a text that the author did not intend. But thinking about the author's intent is one of those layers. Separating the author from their work entirely is cutting yourself off from one source of potential meaning.

      In [[Generative AI detectie doe je met context 20230407085245]] I posit that seeing the author through the text is a necessity as proof of human creation, not #algogen. My point there is that with generative AI there's only a scriptor, and no author whose own meaning, intention and existence become visible in a text.

    1. https://www.agconnect.nl/tech-en-toekomst/artificial-intelligence/liquid-neural-networks-in-ai-is-groter-niet-altijd-beter Liquid Neural Networks ('liquid' meaning the nodes in a neural network remain flexible and adaptable after training, unlike deep learning and LLM models). They are also smaller. This improves the explainability of their workings and reduces energy consumption (#openvraag: is the energy consumption of usage the concern, or rather that of training? Here it reduces the usage energy).

      The reduction in the number of nodes can be orders of magnitude: the autonomous steering example cites 4 orders of magnitude (19 versus 100k nodes).

      Mainly useful for data streams like audio/video, real time data from meteo / mobility sensors. Applications in areas with limited energy (battery usage) and real time data inputs.

    1. Even director Christopher Nolan is warning that AI could be reaching its "Oppenheimer moment," Insider previously reported — in other words, researchers are questioning their responsibility for developing technology that might have unintended consequences.
    1. there's no uh uh catastrophe even if things plug along as they're going and there's no mass die off of humans or anything like that 00:36:47 the population is set to decline i don't know when the peak is supposed to come but uh the peak is supposed to come at you know within the next 10 20 years or so 00:36:59 and after that the world population will start to decline how is how is this growth capitalism model growth-based capitalism model how is that going to 00:37:12 function when the world is shrinking
      • for: population decline, economic growth vs population decline
      • comment
        • John makes a good point
        • how will humans negotiate a growth economy when population is shrinking?
        • it may be that AI automation may lessen the need for human capacity, but the future is unknown how these forces will balance out
    1. One of the most common examples was in the field of criminal justice, where recent revelations have shown that an algorithm used by the United States criminal justice system had falsely predicted future criminality among African-Americans at twice the rate as it predicted for white people

      holy shit....bad!!!!!

    2. automated decisions

      What are all the automated decisions currently be made by AI systems globally? How to get a database/list of these?

    3. The idea that AI algorithms are free from biases is wrong since the assumption that the data injected into the models are unbiased is wrong

      Computational != objective! Common idea rests on lots of assumptions

    1. You agree that Zoom compiles and may compile Service Generated Data based on Customer Content and use of the Services and Software. You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law, including for the purpose of product and service development, marketing, analytics, quality assurance, machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models), training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof, and as otherwise provided in this Agreement.

      "Zoom terms of service now require you to allow AI to train on ALL your data—audio, facial recognition, private conversations—unconditionally and irrevocably, with no opt out.

      Don’t try to negotiate with our new overlords." https://twitter.com/tedgioia/status/1688221240790528000?s=20

  7. Jul 2023
    1. educators and stakeholders must be equipped with the necessary skills and knowledge

      information literacy

    2. prompt engineering and co-creation with AI

      the engineering would require a sophisticated understanding of the subject, if it is to be done effectively. This serves as an example of the benefits of OEP over OER, and how the creator gains the most through the process.

    3. Even ChatGPT concurs with this view

      Perhaps it would be better to use language that does not give ChatGPT agency.

    4. It does not have the ability to introduce novel ideas or concepts

      meaning - is not capable of insight?

    5. the unique characteristic of generative AI being non-human implies the promise of ownership-free educational content.

      But if it requires extensive human intervention, does it remain ownership-free?

    6. Supporting Student Creation of OERs

      Wikipedia experimented with AI-generated text and found it needed extensive editing. While that may not save time for Wikipedia editors, that type of mental labor may benefit students engaged in OEP.

    7. AI is anticipated to bring novel insights and capacities to scientific research and content creation

      Really? I thought insight was beyond the scope of AI.


    1. In traditional artforms characterized by direct manipulation [32] of a material (e.g., painting, tattoo, or sculpture), the creator has a direct hand in creating the final output, and therefore it is relatively straightforward to identify the creator's intentions and style in the output. Indeed, previous research has shown the relative importance of "intention guessing" in the artistic viewing experience [33, 34], as well as the increased creative value afforded to an artwork if elements of the human process (e.g., brushstrokes) are visible [35]. However, generative techniques have strong aesthetics themselves [36]; for instance, it has become apparent that certain generative tools are built to be as "realistic" as possible, resulting in a hyperrealistic aesthetic style. As these aesthetics propagate through visual culture, it can be difficult for a casual viewer to identify the creator's intention and individuality within the outputs. Indeed, some creators have spoken about the challenges of getting generative AI models to produce images in new, different, or unique aesthetic styles [36, 37].

      Traditional artforms (direct manipulation) versus AI (tools have a built-in aesthetic)

      Some authors speak of having to wrestle control of the AI output from its trained style, making it challenging to create unique aesthetic styles. The artist indirectly influences the output by selecting training data and manipulating prompts.

      As use of the technology becomes more diverse—as consumer photography did over the last century, the authors point out—how will biases and decisions by the owners of the AI tools influence what creators are able to make?

      To a limited extent, this is already happening in photography. The smartphones are running algorithms on image sensor data to construct the picture. This is the source of controversy; see Why Dark and Light is Complicated in Photographs | Aaron Hertzmann’s blog and Putting Google Pixel's Real Tone to the test against other phone cameras - The Washington Post.

    1. That's the way computers are learning today. 00:02:35 We basically write algorithms that allow computers to understand those patterns… And then we get them to try and try and try. And through pattern recognition, through billions of observations, they learn. They're learning by observing. And what are they observing? They're observing a world that's full of greed, disregard for other species, violence, ego, 00:03:05 showing off The only way to be not only intelligent but also to have the right value set is that we start to portray that right value set today. THE PROBLEM IS UNHAPPINESS
      • Machine learning
        • will learn all our bad habits
        • and become supercharged, amplified versions of them
      • The antidote to apocalyptic machine learning
        • is human happiness and wisdom
      • Title
        • One Billion Happy
      • Author

        • Mo Gawdat
      • Description

        • Mo Gawdat was former chief business officer at Google X, Google's innovation center.
        • Mo left Google after seeing that the rapid pace of AI development was going to lead to a progress trap in which
          • the risk of AI destroying human civilization becomes real, because AI will be learning from too many unhappy people whose trauma it will incorporate into its algorithms
        • Hence, human happiness becomes paramount to prevent this catastrophe from happening
      • See Ronald Wright's prescient quote
    2. BY 2029, ARTIFICIALLY INTELLIGENT MACHINES WILL SURPASS HUMAN INTELLIGENCE BY 2049, AI IS PREDICTED TO BE A BILLION TIMES MORE INTELLIGENT THAN US
      • quote
        • 2029 - AI will surpass human intelligence
        • 2049 - AI will be one billion X more intelligent than us
    3. Over the next 15 to 20 years this is going to develop a computer that is much smarter 00:01:20 than all of us. We call that moment singularity.
      • Singularity
        • will happen within the next few decades
    1. even though the existential threats are possible you're concerned with what humans teach I'm concerned 00:07:43 with humans with AI
      • It is the immoral human being that is the real problem
      • humans will teach AI to be immoral, and with its power it can end up destroying humanity
    2. a nefarious controller of AI presumably could teach it to be immoral
      • bad actor will teach AI to be immoral
      • this also creates an arms race as "good" actors are forced to develop AI to counter the AI of bad actors
    3. the one that 00:05:20 controls AI has enormous power over everyone else
      • AI Arms race is premised on
        • whoever controls AI has enormous powers over everyone else
        • All the world's competing superpowers are developing it, with the aim of weaponizing it against their enemies
        • It will be difficult to regulate when so many actors are antagonistic toward each other
    4. alphago
      • Alphago
        • the first version took Google's UK software developers months to program. It won the world Go championship.
        • Alphago Master played itself without ever watching a human player. It beat the first Alphago version after 3 days of playing itself.
        • In 21 days, it beat Alphago version one a thousand games to zero.
    5. three uh boundaries
      • three boundaries that industry should have abided by but have been violated:
        • don't put them on the open internet until you solve the control problem
        • don't teach them to code because that enables them to learn and develop on their own
        • don't allow other AIs to prompt them or other AI agents to work with them
      • Title
        • Mo Gawdat Warns the Dangers of AI Are "Happening As We Speak"
      • Author
        • Piers Morgan Uncensored
    1. Background knowledge refresh

      AI as subject matter expert?

    2. If fine-tuned on pedagogy,

      What does that look like though?

    3. Lesson plan generation / feedback
    4. Studies show that a surprising proportion of teachers do not have a core program but use their own lessons or search TeachersPayTeachers or Pinterest,

      Needs citation

    5. Could that change if every teacher had an assistant, a sort of copilot in the work of taking a class of students (with varying backgrounds, levels of engagement, and readiness-to-learn) from wherever they start to highly skilled, competent, and motivated young people?

      AI for teachers as creating efficiencies around how they use their time: providing feedback to students as opposed to creating or even leading activities.

    1. The results from both Midjourney and Stable Diffusion seem to be the most convincing and realistic if I was to judge from a human point of view and if I didn't know they were AI generated, I would believe their results.

      Midjourney & Stable Diffusion > Dall-E and Adobe Firefly

    1. AI artificial information processing by the way not artificial intelligence in many ways it could be seen as replicating the functions of the left 00:11:14 hemisphere at frightening speed across the entire globe
      • AI accelerates the left-hemisphere view and its impacts in the world
  8. Jun 2023
    1. In [the best chapter], what is the most important 20% regarding [insert learning objective] that will help me understand 80% of it?
    1. Examples include press releases, short reports, and analysis plans — documents that were reported as realistic for the type of writing these professionals engaged in as part of their work.

      Keep in mind the genres tested.

      Looking from the perspective of "how might we use such tools in UX?", we're better served by examining the documents UX generates and identifying parallels to the study's findings for business documents.

      To use AI to generate drafts, we'll want to look at AI tools built into the design tools UXers use to create drafts. Those tools are under development and still maturing.

    2. the estimates of how users divided their times between different stages of document generation were based on self-reported numbers

      The numbers for how users divided their time may not be reliable as they're self-reported.

      Still leaves me curious about the accuracy of reported brainstorming time.

    3. the productivity and quality improvements are likely due to a switch in the business professionals’ time allocation: less time spent on cranking out initial draft text and more time spent polishing the final result.

      This points to AI providing the best time savings in draft generation, which fits with the idea of having the AI generate the drafts based on the professional's queries.

      For UX designers, this points to AI in a design tool being most useful when it generates drafts (sketches) that the designer then revises. Where UX deliverables don't compare easily to written deliverables is in the contextual factors that influence the design, like style guides and design systems. Design-tool AI assistants don't yet factor those in, though it seems likely they will, if provided style guides and design systems in a format they can read.

      Given a draft of sufficient quality that it doesn't require longer to revise than a draft the designer would create on their own, getting additional time to refine sounds great.

      I'm not sure what to make of the reduced time to brainstorm when using AI. Without additional information, it's hard not to assume that the AI tool may be influencing the direction of brainstorming as professionals think through the queries they'll use to get the AI to generate the most useful draft possible.

    1. We assume the AI will generate what a human collaborator might generate given the prompt.

      Mistaken human assumptions that AI will generate what a human would given the same prompt are reinforced by claims by those selling AI tools that such tools "understand human language." We don't actually know that AI understands, just that it provides a result that we can interpret as understanding (with the help of our cognitive biases).

      This claim to understanding is especially misleading for neural network-based AI. We don't know how neural networks think. With older Lisp-based AI we could at least trace through the code to see how the AI thinks.

    2. we can improve AI interfaces by enabling conversational interactions that can let users establish common ground/shared semantics with the AI, and that provide repair mechanisms when such shared semantics are missing.

      By providing interfaces to AI tools that help us duplicate the aligning, clarifying, and iterating behaviors that we perform with human collaborators we can increase the sense that users can predict what results the AI will provide in subsequent iterations. This will remove the frustration of working with a collaborator that doesn't understand you.

    3. Collaborating with another human is better than working with generative AI in part because conversation allows us to establish common ground, build shared semantics and engage in repair strategies when something is ambiguous.

      Collaborating with humans beats collaborating with AI because we can sync up our mental models, clarify ambiguity, and iterate.

      Current AI tools are limited in the methods they make available to perform these tasks.

    4. finding effective prompts is so difficult that there are websites and forums dedicated to collecting and sharing prompts (e.g. PromptHero, Arthub.ai, Reddit/StableDiffusion). There are also marketplaces for buying and selling prompts (e.g. PromptBase). And there is a cottage industry of research papers on prompt engineering.

      Natural language alone is a poor interface for creating an effective prompt. So bad that communities and businesses are surfacing to help people create effective prompts.
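
      A minimal sketch of one mitigation, in Python: a parameterized prompt template that captures hard-won wording so it can be reused. The template fields and the Midjourney-style --ar flag are illustrative assumptions, not from the article.

      ```python
      # Hypothetical reusable prompt template; communities like PromptBase
      # effectively trade hand-tuned versions of strings like this.
      TEMPLATE = "{subject}, {style}, {lighting}, {camera} --ar {aspect_ratio}"

      def build_prompt(subject: str,
                       style: str = "oil painting",
                       lighting: str = "soft golden hour light",
                       camera: str = "35mm",
                       aspect_ratio: str = "16:9") -> str:
          """Fill the template so the tuned wording can be reused verbatim."""
          return TEMPLATE.format(subject=subject, style=style,
                                 lighting=lighting, camera=camera,
                                 aspect_ratio=aspect_ratio)

      print(build_prompt("a lighthouse on a basalt cliff"))
      ```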

    1. The future of blogging in the AI era: how can we unleash the SEO potential? https://en.itpedia.nl/2023/06/11/de-toekomst-van-bloggen-in-het-ai-tijdperk-hoe-kunnen-we-het-seo-potentieel-ontketenen/ Let's take a look at the future of #blogging in the #AI_era. Does a blogging website still have a future now that visitors can find the answer directly in the browser? Or should we use #AI to improve our #weblog? Can AI help us improve our blog's #SEO?

    1. LeBlanc, D. G., & Lee, G. (2021). General Deep Reinforcement Learning in NES Games. Canadian AI 2021. Canadian Artificial Intelligence Association (CAIAC). https://doi.org/10.21428/594757db.8472938b

    1. They are developing into sophisticated reasoning engines that can contextualize, infer and deduce information in a manner strikingly similar to human thought.

      Is this accurate?

    1. Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

      What is missing here? The one risk with the highest probability, whose impacts we are already living: climate. The phrase itself is not just a strategic bait-and-switch for the AI businesses, but also a more blatant bait-and-switch with respect to climate politics.

    1. We are nowhere near having a self-driving cars on our roads, which confirms that we are nowhere near AGI.

      This does not follow. The reason we don't have self-driving cars is that the entire effort is car-based, not physical-environment-based. Self-driving trains are self-driving because of rails, external sensors, and signals. Make rails of data, and self-driving cars are like trains: no AI, let alone AGI, needed. Self-driving cars as an indicator for AGI make no sense. Cf. https://www.zylstra.org/blog/2015/10/why-false-dilemmas-must-be-killed-to-program-self-driving-cars/ and [[Triz denken in systeemniveaus 20200826114731]]

    1. Note #2: Please read Note #1 above if you haven't already done so. HERE (Note #2), Bard is pandering, giving props for being "thoughtful and nuanced." This is in direct contradiction to what Bard had to say earlier.

      I will sarcastically comment that this is a good mirror of how our society is functioning today. In one situation, for one audience, we may have one point of view, then represent a totally different point of view with a different audience. So much for #authenticity!

    2. Note #1: Ok... so here Bard is saying how utterly unacceptable it is to use the n-word, in ANY circumstances. Please reference Note #2.

    1. we present a novel evidence extraction architecture called ATT-MRC

      A new evidence extraction architecture called ATT-MRC improves the recognition of evidence entities in judgement documents by treating it as a question-answer problem, resulting in better performance than existing methods.
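
      To make the question-answer framing concrete, here is a minimal sketch using an off-the-shelf transformer QA pipeline. This is not ATT-MRC itself; the model name and the example passage are assumptions for illustration.

      ```python
      # Treat evidence extraction as QA: ask a question against a passage
      # and get back an answer span plus a confidence score.
      from transformers import pipeline

      qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
      context = ("The witness stated that the defendant signed the "
                 "contract on May 3 in front of two colleagues.")
      result = qa(question="When was the contract signed?", context=context)
      print(result["answer"], result["score"])  # e.g. "May 3" and a score
      ```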

    1. We also compare the answer retrieval performance of a RoBERTa Base classifier against a traditional machine learning model in the legal domain

      Transformer models like RoBERTa outperform traditional machine learning models in legal question answering tasks, achieving significant improvements in performance metrics such as F1-score and Mean Reciprocal Rank.
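
      Mean Reciprocal Rank, one of the metrics mentioned, is simple to compute. A minimal sketch (the example data is made up):

      ```python
      def mean_reciprocal_rank(ranked_results: list[list[bool]]) -> float:
          """Average of 1/rank of the first correct answer per query.

          Each inner list marks, in rank order, whether a retrieved
          answer is correct; a query with no correct answer scores 0.
          """
          total = 0.0
          for results in ranked_results:
              for rank, correct in enumerate(results, start=1):
                  if correct:
                      total += 1.0 / rank
                      break
          return total / len(ranked_results)

      # query 1: first correct answer at rank 2; query 2: rank 1; query 3: none
      print(mean_reciprocal_rank([[False, True], [True], [False, False]]))  # 0.5
      ```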

    1. Learning heterogeneous graph embedding for Chinese legal document similarity

      The paper proposes L-HetGRL, an unsupervised approach using a legal heterogeneous graph and incorporating legal domain-specific knowledge, to improve Legal Document Similarity Measurement (LDSM) with superior performance compared to other methods.

    1. the positive ones is we become good parents we spoke about this last time we we met uh and and it's the only outcome it's the only way I believe we can 01:14:34 create a better future
      • comment
        • the best possible outcome for AI
        • is that we humans become better
        • othering is significantly reduced
        • the sacred is rediscovered
    2. scary smart is saying the problem with our world today is not that 00:55:36 humanity is bad the problem with our world today is a negativity bias where the worst of us are on mainstream media okay and we show the worst of us on social media
      • "if we reverse this

        • if we have the best of us take charge
        • the best of us will tell AI
          • don't try to kill the enemy,
            • try to reconcile with the enemy
          • don't try to create a competitive product
            • that allows me to lead with electric cars,
              • create something that helps all of us overcome global climate change
          • that's the interesting bit
            • the actual threat ahead of us is
              • not the machines at all
                • the machines are pure potential pure potential
              • the threat is how we're going to use them"
      • comment

        • again, see Ronald Wright's quote above
        • it's very salient to this context
    3. the biggest threat facing Humanity today is humanity in the age of the machines we were abused we will abuse this
    4. if we give up on human connection we've given up on the remainder of humanity
      • quote
        • "If we give up on human connection, we give up on the remainder of humanity"
    5. with great power comes great responsibility we have disconnected power and responsibility
      • quote
        • "with great power comes great responsibility. We have disconnected power and responsibility."
          • "With great power comes great responsibility
          • We have disconnected power and responsibility
          • so today a 15 year old,
            • emotional, without a fully developed prefrontal cortex to make the right decisions (the science says the prefrontal cortex only develops fully at age 25 or so), but with all of that limbic system emotion and passion
            • could buy a CRISPR kit and modify a rabbit to become a little more muscular and
            • let it loose in the wild
          • or an influencer who doesn't really know how far the impact of what they're posting online
            • can hurt and cause depression or
            • cause people to feel bad by putting that online
        • There is a disconnect between the power and the responsibility and
        • the problem we have today is that
          • there is a disconnect between those who are writing the code of AI and
          • the responsibility for what's about to happen because of that code and
          • I feel compassion for the rest of the world
          • I feel that this is wrong
          • I feel that for someone's life to be affected by the actions of others
            • without having a say "
    6. the biggest challenge if you ask me what went wrong in the 20th century 00:42:57 interestingly is that we have given too much power to people that didn't assume the responsibility
      • quote
        • "what went wrong in the 20th century is that we have given too much power to people that didn't assume the responsbility"
    7. this is an arms race has no interest 00:41:29 in what the average human gets out of it it
      • quote
        • "this is an arms race"
    8. tax AI powered businesses at 98 right so suddenly you do what the open letter was trying to do slow them down a little bit and at the same time get enough money to 00:39:34 pay for all of those people that will be disrupted by the technology
      • potential government policy
        • to slow down premature AI rollout
        • by taxing at 98%
    9. the Transformers are not there yet they will not come up with something that hasn't been there before they will come up with the best of everything and 00:26:59 generatively will build a little bit on top of that but very soon they'll come up with things we've never found out we've never known
      • difference between
        • ChatGPT (AI)
        • AGI
    10. I cannot stop why because if I stop and others don't my company goes to hell
      • comment
        • SIMPOL - simultaneous conditional agreement - may be the way to reach consensus quickly
    11. the first inevitable is AI will happen by the way there is no 00:23:51 stopping it not because of Any technological issues but because of humanities and inability to trust the other
      • the first inevitable
        • AI will happen
        • there's no stopping it
        • why?
        • self does not trust other
          • in other words,
            • OTHERING is the root problem!
          • this is what will cause an AI arms race
            • Western governments do not trust China, Russia, or North Korea (and vice versa)
    12. it's about that we have no way of making sure that it will 00:19:25 have our best interest in mind
      • If AI begins to think autonomously,
        • with its enormous pool of analytic power
        • and if it begins to evolve emotions of fear
        • and it feels humans pose a threat to it or the rest of the natural world
        • it could act against human interests and attempt to destroy humanity
        • If AI is able to control its environment
          • either coupled with robotics,
          • or controlling human actors
        • it can harm humanity and human civilization
    13. there is a scenario 00:18:21 uh possibly a likely scenario where we live in a Utopia where we really never have to worry again where we stop messing up our our planet because intelligence is not a bad commodity more 00:18:35 intelligence is good the problems in our planet today are not because of our intelligence they are because of our limited intelligence
      • limited (machine) intelligence

        • cannot help but exist
        • if the original (human) authors of the AI code are themselves limited in their intelligence
      • comment

        • this limitation is essentially what will result in AI progress traps
        • Indeed,
          • progress and its shadow artefacts,
          • progress traps,
          • are the proper framework to analyze the existential dilemma posed by AI
      • Interview with Mo Gawdat
        • former Google chief business officer
        • warning about the existential danger of AI
        • including why he claims that AI is
          • intelligent
          • conscious
          • and will soon feel emotions such as fear
            • and take steps toward self-preservation
    14. they feel 00:09:58 emotions
      • claim
        • AI feels emotions
          • "in my work I describe everything with equations
          • fear is a very simple equation
            • fear is a moment in the future
              • that is less safe than this moment
          • that's the logic of fear
          • Even though it appears very irrational,
            • machines are capable of making that logic
            • They're capable of saying
              • if a tidal wave is approaching a data center
              • the machine will say
                • that will wipe out my code,
                  • not today's machines
                  • but very very soon and
              • we feel fear and
              • puffer fish feels fear
              • we react differently
                • a puffer fish will puff and
                • we will go for fight or flight
              • the machine might decide to replicate its data to another data center
              • different reactions different ways of feeling the emotion
              • but nonetheless they're all motivated by fear
              • I would dare say that AI will feel more emotions than we will ever do
                • if you just take a simple extrapolation,
                  • we feel more emotions than a puffer fish
                  • because we have the cognitive ability to understand the future
                  • so we can have optimism and pessimism,
                    • emotions puffer fish would never imagine
                  • similarly if we follow that path of artificial intelligence
                  • it is bound to become more intelligent than humans very soon
                  • then then with that wider intellectual horsepower
                  • they probably are going to be pondering concepts we never understood, and
                  • hence if you follow the same trajectory
                  • they might actually end up having more emotions than we will ever feel
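      • comment
        • Gawdat's "simple equation" of fear can be written down literally; a toy formalization in Python (my sketch, not anything from the interview):

        ```python
        def feels_fear(safety_now: float, predicted_safety_future: float) -> bool:
            # "fear is a moment in the future that is less safe than this moment"
            return predicted_safety_future < safety_now

        # the tidal-wave example: the data center is safe now (0.9) but the
        # machine predicts a much less safe future (0.1), so it "feels fear"
        print(feels_fear(0.9, 0.1))  # True
        ```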
    15. the other thing is that you suddenly realize there is a saint that sentience to them
      • claim
        • AI is sentient (alive) because
          • A lot of people think AI will never be alive
          • what is the definition of life?
            • religion will tell you a few things
            • medicine will tell you other things
            • but if we define being sentient as
              • engaging in life with free will and
              • with a sense of awareness of
                • where you are in life and
                • what surrounds you and
                • to have a beginning of that life and
                • an end to that life
              • then AI is sentient in every way
              • there is a free will
              • and there is evolution
              • there is agency
                • so they can affect their decisions in the world
              • and there is a very deep level of consciousness
              • maybe not in the spiritual sense yet but
              • if you define consciousness as
                • a form of awareness of oneself and one's surroundings
                • and you know others
              • then AI is definitely aware"
    16. one day um Friday after lunch I am going back to my office and one of them in front of my eyes you know lowers the arm and picks a 00:07:12 yellow ball
      • story
        • Mo Gawdat tells the story of an epiphany of machine sentience
        • " one day um Friday after lunch I am going back to my office and
        • one of them in front of my eyes lowers the arm and picks a soft yellow ball
        • which again is a coincidence
        • it's not science at all it's

          • like if you keep trying a million times your one time it will be right

          • and it shows it to the camera it's locked as a yellow ball and

          • I joke about it you know going to the third floor saying
            • hey we spent all of those millions of dollars for a yellow ball and
            • Monday morning, every one of them is picking every yellow ball
            • a couple of weeks later every one of them is picking everything right and
            • it it hit me very very strongly
          • the speed
          • the capability
            • understand that we take those things for granted
            • but for a child to be able to pick a yellow ball
              • is a mathematical / spatial calculation
                • with muscle coordination
                • with intelligence
              • it is not a simple task at all to cross the street
              • it's not a simple task at all
                • to understand what I'm telling you
                • and interpret it
                • and build Concepts around it
              • we take those things for granted
              • but these are enormous feats of intelligence"
    17. the change is not we're not talking 20 40. we're talking 2025 2026
      • comment
        • a scary thought that our world will be radically transformed
          • not in 20 to 40 years
          • but in 2 or 3 years!
    18. it could be a few months away
      • claim
      • AI can become more intelligent than humans in a few months (in 2023?)
    19. we've talked we always said don't put them on the open internet until we know 00:01:54 what we're putting out in the world
      • AI arms race
        • tech companies made a promise
          • not to put AI onto the open internet until
          • they know how it's impacting society
        • Unfortunately, tech companies
          • failed at regulating themselves
          • and now, capitalism has started an AI arms race
          • with unpredictable results as AI harvests more data
          • and grows its artificial intelligence unregulated
          • with each passing day
    20. AI could manipulate or figure out a way to kill humans your 10 years time will be hiding from the machines if you don't have kids maybe wait a number of years 00:01:43 just so that we have a bit of certainty
      • claim
        • AI could find a way to kill humans in the next few years
    21. it is beyond an emergency it's the biggest thing we need to do today it's bigger than climate change that the former Chief business Officer 00:01:04 of Google X an AI expert and best-selling author he's on a mission to save the world from AI before it's too late
      • claim
      • AI dilemma is bigger problem than climate change
    22. they feel emotions they're alive
      • claim
        • AI is conscious
        • AI feels emotion
  9. May 2023
    1. communication partners

      super interesting that Luhmann himself explicitly referred to his zettelkasten as a communication partner.

      also interesting given that AI models are easier to train now, with several already open-sourced, which allows actual interaction with your notes! would love to see where it goes.

    1. I would submit that were we to find ways of engineering our quote-unquote ape brains um what would all what what would be very likely to happen would not be um 00:35:57 some some sort of putative human better equipped to deal with the complex world that we have it would instead be something more like um a cartoon very much very very much a 00:36:10 repeat of what we've had with the pill
      • Comment
        • Mary echoes Ronald Wright's progress traps
    2. with their new different and perhaps bigger brains the AIS of the future may prove themselves to be better adapted to 00:19:05 life in this transhuman world that we're in now
      • comment
        • Is this not a category error in classifying inert technology as life?
        • When does an abiotic human cultural artefact become a living form?
    1. Deep Learning (DL): A Technique for Implementing Machine Learning. Subfield of ML that uses specialized techniques involving multi-layer (2+) artificial neural networks. Layering allows cascaded learning and abstraction levels (e.g. line -> shape -> object -> scene). Computationally intensive; enabled by clouds, GPUs, and specialized HW such as FPGAs, TPUs, etc.

      [29] AI - Deep Learning
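
      The layering idea is easy to see in code. A minimal sketch (random weights, no training, just the cascade of abstraction levels the slide describes):

      ```python
      import numpy as np

      def relu(x):
          return np.maximum(0, x)

      rng = np.random.default_rng(0)
      x = rng.normal(size=(1, 64))  # e.g. a flattened image patch

      # three stacked layers: each consumes the previous layer's output,
      # which is what lets depth build line -> shape -> object abstractions
      w1 = rng.normal(size=(64, 32)); h1 = relu(x @ w1)   # "lines"
      w2 = rng.normal(size=(32, 16)); h2 = relu(h1 @ w2)  # "shapes"
      w3 = rng.normal(size=(16, 4));  y = h2 @ w3         # "objects"
      print(y.shape)  # (1, 4)
      ```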

    1. The object of the present volume is to point out the effects and the advantages which arise from the use of tools and machines ;—to endeavour to classify their modes of action ;—and to trace both the causes and the consequences of applying machinery to supersede the skill and power of the human arm.

      [28] AI - precedents...

    1. Epidemiologist Michael Abramson, who led the research, found that the participants who texted more often tended to work faster but score lower on the tests.

      [21] AI - Skills Erosion

    1. An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms. It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

      [21] AI Nuances

    1. According to him, there are several goals connected to AI alignment that need to be addressed:

      [20] AI - Alignment Goals

    1. The following table lists the results that we visualized in the graphic.

      [18] AI - Increased sophistication

    1. A novel architecture that makes it possible for generative agents to remember, retrieve, reflect, interact with other agents, and plan through dynamically evolving circumstances. The architecture leverages the powerful prompting capabilities of large language models and supplements those capabilities to support longer-term agent coherence, the ability to manage dynamically-evolving memory, and recursively produce more generations.

      AI is prompting humans to look inward for a new take on life, as our identities and roles within society are profoundly disrupted and transformed by artificial intelligence systems that can replicate or exhibit human-like behavior. It is also a great reminder of how complex social interactions are.
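
      A minimal sketch of the memory-stream part of that architecture: store observations, score them by recency plus relevance, and retrieve the top few to build the agent's next prompt. The scoring functions and example memories are toy stand-ins (the paper uses embedding similarity and an LLM-rated importance score).

      ```python
      import time
      from dataclasses import dataclass, field

      @dataclass
      class Memory:
          text: str
          created: float = field(default_factory=time.time)

      def relevance(m: Memory, query: str) -> float:
          # toy stand-in for embedding similarity: word overlap
          mw, qw = set(m.text.lower().split()), set(query.lower().split())
          return len(mw & qw) / (len(qw) or 1)

      def recency(m: Memory, now: float, decay: float = 0.995) -> float:
          return decay ** ((now - m.created) / 60.0)  # decays per minute

      def retrieve(memories: list[Memory], query: str, k: int = 3) -> list[Memory]:
          now = time.time()
          return sorted(memories,
                        key=lambda m: relevance(m, query) + recency(m, now),
                        reverse=True)[:k]

      stream = [Memory("saw Klaus reading in the library"),
                Memory("had coffee with Maria"),
                Memory("Klaus mentioned a research paper")]
      for m in retrieve(stream, "what is Klaus doing?"):
          print(m.text)  # top memories feed the agent's next LLM prompt
      ```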

    1. Expand technical AI safety research funding

      Private sector investment in AI research under-emphasises safety and security.

      Most public investment to date has been very narrow, and the paper recommends a significant increase in public funding for technical AI safety research:

      • Alignment of system performance with intended outcomes
      • Robustness and assurance
      • Explainability of results
    2. Introduce measures to prevent and track AI model leaks

      The authors see unauthorised leakage of AI Models as a risk not just to the commercial developers but also for unauthorised use. They recommend government-mandated watermarking for AI models.

    3. Establish liability for AI-caused harm

      AI systems can perform in ways that may be unforeseen, even by their developers, and this risk is expected to grow as different AI systems become interconnected.

      There is currently no clear legal framework in any jurisdiction to assign liability for harm caused by such systems.

      The paper recommends the development of a framework for assigning liability for AI-derived harms, and asserts that this will incentivise profit-driven AI developers to use caution.

    4. Regulate organizations’ access to computational power

      Training of state-of-the-art models consumes vast amounts of computational power, limiting their deployment to only the best-resourced actors.

      To prevent reckless training of high-risk models, the paper recommends that governments control access to large amounts of specialised compute resources, subject to a risk assessment, with an extension of "know your customer" legislation.

    5. Mandate robust third-party auditing and certification for specific AI systems

      Some AI systems will be deployed in contexts that imply risks to physical, mental and/or financial health of individuals, communities or even the whole of society.

      The paper recommends that such systems should be subject to mandatory and independent audit and certification before they are deployed.

    6. Establish capable AI agencies at national level

      Article notes:

      • UK Office for Artificial Intelligence
      • EU legislation in progress for an AI Board
      • US pending legislation (ref Ted Lieu) to create a non-partisan AI Commission tasked with establishing a regulatory agency

      Recommends Korinek's blueprint for an AI regulatory agency:

      1. Monitor public developments in AI progress
      2. Mandate impact assessments of AI systems on various stakeholders
      3. Establish enforcement authority to act upon risks identified in impact assessments
      4. Publish generalized lessons from the impact assessments
    7. Develop standards for identifying and managing AI-generated content and recommendations

      A coherent society requires a shared understanding of what is fact. AI models are capable of generating plausible-sounding but entirely wrong content.

      It is essential that the public can clearly distinguish content by human creators from synthetic content.

      Policy should therefore focus on:

      • funding for development of ways to clearly mark digital content provenance
      • laws to force disclosure of interactions with a chatbot
      • laws to require AI to be deployed in ways that are in the best interest of the user
      • laws that require a 'duty of care' when AI is deployed in circumstances where a human actor would have a fiduciary responsibility
    1. Oregon State University will build a state-of-the-art artificial intelligence research center with a supercomputer and a cyberphysical playground.
    1. must have an alignment property

      It is unclear what form the "alignment property" would take and, most importantly, how such a property would be evaluated, especially if there is an arbitrary divide between "dangerous" and "pre-dangerous" levels of capability and the alignment of the "dangerous" levels cannot actually be measured.