27 Matching Annotations
  1. Feb 2024
    1. The core thesis of this page is that as a product manager you need a very clear sense of the business drivers that equate to success, and the discipline to prioritise only the features that support those drivers.

      The example is a desired feature on the OpenAI dashboard that appeared only recently despite lots of prior feedback from users that it was important to them.

  2. Jan 2024
  3. Jun 2023
    1. Extracting knowledge during weekly reviews

      Key points in this article:

      • capture things of interest / things you learn as you go
      • for speed, capture these in the body of your daily note
      • as part of the weekly review, transfer these ideas into their own notes, and update the daily notes to link to the newly formed notes
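
      The transfer step above could be partly automated. A minimal sketch, assuming daily notes are markdown files and captured ideas are bullet lines tagged `#capture` (both assumptions for illustration, not from the article):

```python
import re

# Hypothetical convention: a capture is a bullet ending in "#capture"
CAPTURE = re.compile(r"^- (.+?)\s*#capture\s*$")

def extract_captures(daily_note: str) -> tuple[str, list[str]]:
    """Pull tagged capture lines out of a daily note, replacing each
    with a wiki-style link to a new standalone note of the same title."""
    kept, captures = [], []
    for line in daily_note.splitlines():
        m = CAPTURE.match(line)
        if m:
            title = m.group(1)
            captures.append(title)
            kept.append(f"- [[{title}]]")  # daily note now links to the new note
        else:
            kept.append(line)
    return "\n".join(kept), captures
```

      The returned `captures` list would then drive creation of the new note files; the tag name and link syntax would follow whatever conventions your note system uses.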
  4. May 2023
    1. Expand technical AI safety research funding

      Private sector investment in AI research under-emphasises safety and security.

      Most public investment to date has been very narrow, and the paper recommends a significant increase in public funding for technical AI safety research:

      • Alignment of system performance with intended outcomes
      • Robustness and assurance
      • Explainability of results
    2. Introduce measures to prevent and track AI model leaks

      The authors see unauthorised leakage of AI models as a risk not only to the commercial developers but also as a route to unauthorised use. They recommend government-mandated watermarking for AI models.

    3. Establish liability for AI-caused harm

      AI systems can perform in ways that may be unforeseen, even by their developers, and this risk is expected to grow as different AI systems become interconnected.

      There is currently no clear legal framework in any jurisdiction to assign liability for harm caused by such systems.

      The paper recommends the development of a framework for assigning liability for AI-derived harms, and asserts that this will incentivise profit-driven AI developers to use caution.

    4. Regulate organizations’ access to computational power

      Training of state-of-the-art models consumes vast amounts of computational power, limiting their development to only the best-resourced actors.

      To prevent reckless training of high-risk models, the paper recommends that governments control access to large amounts of specialised compute resource, subject to a risk assessment, with an extension of "know your customer" legislation.

    5. Mandate robust third-party auditing and certification for specific AI systems

      Some AI systems will be deployed in contexts that imply risks to physical, mental and/or financial health of individuals, communities or even the whole of society.

      The paper recommends that such systems should be subject to mandatory and independent audit and certification before they are deployed.

    6. Establish capable AI agencies at national level

      Article notes:

      • UK Office for Artificial Intelligence
      • EU legislation in progress for an AI Board
      • US pending legislation (ref Ted Lieu) to create a non-partisan AI Commission tasked with establishing a regulatory agency

      Recommends Korinek's blueprint for an AI regulatory agency:

      1. Monitor public developments in AI progress
      2. Mandate impact assessments of AI systems on various stakeholders
      3. Establish enforcement authority to act upon risks identified in impact assessments
      4. Publish generalized lessons from the impact assessments
    7. Develop standards for identifying and managing AI-generated content and recommendations

      A coherent society requires a shared understanding of what is fact. AI models are capable of generating plausible-sounding but entirely wrong content.

      It is essential that the public can clearly distinguish content by human creators from synthetic content.

      Policy should therefore focus on:

      • funding for development of ways to clearly mark digital content provenance
      • laws to force disclosure of interactions with a chatbot
      • laws to require AI to be deployed in ways that are in the best interest of the user
      • laws that require a 'duty of care' when AI is deployed in circumstances where a human actor would have a fiduciary responsibility
    1. How does Copilot in Dynamics 365 and Power Platform work?
      • receives the input prompt from the user inside the app context (e.g. Dynamics 365 or Power Apps)
      • accesses data and documents, security-trimmed to the access permitted for that user by Microsoft Graph and Dynamics
      • uses this contextual information to ground the LLM query (i.e. provides context in the prompt)
      • post-processes the LLM response for security and compliance checks, and to generate app commands
      • returns the recommended response plus commands back to the apps
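
      The steps above follow a standard retrieval-augmented pattern. A minimal sketch of the grounding and post-processing flow, where `search`, `llm`, and `postprocess` are hypothetical stand-ins for the Graph/Dynamics retrieval, the LLM call, and the compliance checks (none of these are real Microsoft APIs):

```python
def build_grounded_prompt(user_prompt: str, context_snippets: list[str]) -> str:
    """Ground the LLM query by prepending security-trimmed context
    retrieved for this specific user."""
    context = "\n".join(f"- {s}" for s in context_snippets)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"User request: {user_prompt}"
    )

def copilot_turn(user_prompt: str, search, llm, postprocess) -> str:
    # 1. retrieve only data this user is already permitted to see
    snippets = search(user_prompt)
    # 2. ground the query with that context
    prompt = build_grounded_prompt(user_prompt, snippets)
    # 3. call the model, then 4. run security/compliance post-processing
    return postprocess(llm(prompt))
```

      The key design point from the article is that security trimming happens at retrieval time, so the model never sees data the user could not access directly.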
    1. Limitations

      GPT models are prone to "hallucinations", producing false "facts" and committing errors of reasoning. OpenAI claim that GPT-4 is significantly better than its predecessor models, scoring between 70% and 82% on their internal factual evaluations across various subjects, and 60% on adversarial questioning.

  5. Mar 2023
    1. The latter skill has become known as “prompt engineering”: the technique of framing one’s instructions in terms most clearly understood by the system, so it returns the results that most closely match expectations – or perhaps exceed them. Tech commentators were quick to predict that prompt engineering would become a sought-after and well remunerated job description in a “no code” future, where the most powerful way of interacting with intelligent systems would be through the medium of human language. No longer would we need to know how to draw, or how to write computer code: we would simply whisper our desires to the machine and it would do the rest. The limits on AI’s creations would be the limits of our own imaginations.

      Not only is "prompt engineering" seen as a source of future employment, it may also be the attack vector of choice against these systems.

  6. Jan 2023
    1. So the special part of this is then you add a README.md file and this can then have HTML code, markdown text, or anything you'd like. Some profile read me's I've seen are really fancy, others are like a mini webpage for content. I have mine set to share some bio information, social media, and then some recent blog posts

      Interesting idea about using a combination of the GitHub profile page and some GitHub Actions to have an automatically updated profile

  7. Nov 2022
    1. Trace claims, quotes, and media back to the original context
      • many things on the internet are distorted by removal of context
      • find the original source
      • sense-check the version you are evaluating against the source
    2. Find better coverage
      • look for trusted sources that repeat (or refute) the claim
      • look for consensus
      • do you agree with the consensus?
    3. Investigate the source
      • who wrote this?
      • what is their expertise?
      • what might be their motivation?
    4. Stop
      • sense-check, do you know the website or other source?
      • remember your purpose, and titrate your level of effort accordingly
    5. Four Moves

      A short list of steps to check a source, linked to effective web techniques.

    6. Context is everything - very many claims on the internet are distorted by lack of context.

  8. Oct 2022
    1. With respect to the circadian rhythm, we saw very quiet cells that weren’t metabolically active with the high-fat diet group,
    2. Among the most striking findings, the scientists observed that genes governing extracellular modelling (ECM) and circadian rhythm were regulated by both exercise and obesity across all three tissue types. Obesity up-regulated ECM-related pathways, while exercise down-regulated them. Conversely, exercise up-regulated circadian-related pathways, and obesity down-regulated them.
    3. The investigators determined that there are opposite responses to exercise and obesity across all three tissues and highlight prominent molecular pathways modulated by exercise and obesity.
    1. Pragmatic model of individual motivation, derived from multiple research sources, and aimed at team managers.

      • B - Belonging
      • I - Improvement
      • C - Choice
      • E - Equality / Fairness
      • P - Predictability
      • S - Significance

    2. BICEPS acronym is licensed under a Creative Commons Attribution 4.0 International License: Paloma Medina 2015