  1. Feb 2023
    1. Several participants noted the occasionally surreal quality of Wordcraft's suggestions.

      Wordcraft's hallucinations can create interesting and creatively surreal suggestions.

      How might one dial up or down the ability to hallucinate or create surrealism within an artificial intelligence used for thinking, writing, etc.?

    2. Writers struggled with the fickle nature of the system. They often spent a great deal of time wading through Wordcraft's suggestions before finding anything interesting enough to be useful. Even when writers struck gold, it proved challenging to consistently reproduce the behavior. Not surprisingly, writers who had spent time studying the technical underpinnings of large language models or who had worked with them before were better able to get the tool to do what they wanted.

      Because one may need to spend an inordinate amount of time filtering through potentially bad suggestions from an artificial intelligence, the time and energy spent keeping a commonplace book or zettelkasten may pay off magnificently in the long run.

    3. Many authors noted that generations tended to fall into clichés, especially when the system was confronted with scenarios less likely to be found in the model's training data. For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship. Yudhanjaya Wijeratne attempted to deviate from standard fantasy tropes (e.g. heroes as cartographers and builders, not warriors), but Wordcraft insisted on pushing the story toward the well-worn trope of a warrior hero fighting back enemy invaders.

      Examples of artificial intelligence pushing toward pre-existing biases based on training data sets.

    4. Wordcraft tended to produce only average writing.

      How to improve on this state of the art?

    5. “...it can be very useful for coming up with ideas out of thin air, essentially. All you need is a little bit of seed text, maybe some notes on a story you've been thinking about or random bits of inspiration and you can hit a button that gives you nearly infinite story ideas.”- Eugenia Triantafyllou

      Eugenia Triantafyllou is talking about crutches for creativity and inspiration, but seems to miss the value of collecting interesting tidbits along the road of life that one can use later. Instead, the emphasis here becomes one of relying on an artificial intelligence doing it for you at the "hit of a button". If this is the case, then why not just let the artificial intelligence do all the work for you?

      This is the area where the cultural loss of mnemonics used in orality or even the simple commonplace book will make us easier prey for (over-)reliance on technology.


      Is serendipity really serendipity if it's programmed for you?

    6. The authors agreed that the ability to conjure ideas "out of thin air" was one of the most compelling parts of co-writing with an AI model.

      Again note the reference to magic with respect to the artificial intelligence: "the ability to conjure ideas 'out of thin air'".

    7. Wordcraft shined the most as a brainstorming partner and source of inspiration. Writers found it particularly useful for coming up with novel ideas and elaborating on them. AI-powered creative tools seem particularly well suited to sparking creativity and addressing the dreaded writer's block.

      Just as using a text for writing generative annotations (having a conversation with a text) is a useful exercise for writers and thinkers, creative writers can stand to have similar textual creativity prompts.

      Compare Wordcraft affordances with tools like Nabokov's card index (zettelkasten) method, Twyla Tharp's boxes, MadLibs, cadavre exquis, et al.

      The key is to have some sort of creativity catalyst so that one isn't working in a vacuum or facing the dreaded blank page.

    8. We like to describe Wordcraft as a "magic text editor". It's a familiar web-based word processor, but under the hood it has a number of LaMDA-powered writing features that reveal themselves depending on the user's activity.

      The engineers behind Wordcraft refer to it as a "magic text editor". For many, this is a cop-out versus a more concrete description of what is actually happening under the hood of the machine.

      It's also similar, though subtly different, to the idea of the "magic of note taking", by which writers are talking about the ideas of emergent creativity and combinatorial creativity which occur in that space.

    9. The application is powered by LaMDA, one of the latest generation of large language models. At its core, LaMDA is a simple machine — it's trained to predict the most likely next word given a textual prompt. But because the model is so large and has been trained on a massive amount of text, it's able to learn higher-level concepts.

      Is LaMDA really able to "learn higher-level concepts" or is it just a large, straightforward information theoretic-based prediction engine?
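
      To make the quoted claim concrete, here is a minimal sketch of what "predict the most likely next word given a textual prompt" means mechanically. The vocabulary and scores below are invented for illustration; a real model like LaMDA produces such a distribution over an enormous vocabulary at every step, using billions of learned parameters.

      ```python
      import math

      # Toy illustration of next-word prediction. The vocabulary and logits are
      # made up; a real model scores tens of thousands of tokens per step.
      vocab = ["castle", "dragon", "spreadsheet", "forest"]
      logits = [2.1, 1.7, -0.5, 1.2]  # hypothetical raw model outputs for some prompt

      # Softmax turns the raw scores into a probability distribution over the vocabulary.
      exps = [math.exp(x) for x in logits]
      probs = [e / sum(exps) for e in exps]

      # "The most likely next word" is simply the highest-probability entry.
      best = max(range(len(vocab)), key=lambda i: probs[i])
      print({w: round(p, 3) for w, p in zip(vocab, probs)})
      print("predicted next word:", vocab[best])
      ```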

    10. Our team at Google Research built Wordcraft, an AI-powered text editor centered on story writing, to see how far we could push the limits of this technology.
    1. https://pair.withgoogle.com/

      People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI by doing fundamental research, building tools, creating design frameworks, and working with diverse communities.

    1. Author's note by Robin Sloan, November 2022

    2. I have to report that the AI did not make a useful or pleasant writing partner. Even a state-of-the-art language model cannot presently “understand” what a fiction writer is trying to accomplish in an evolving draft. That’s not unreasonable; often, the writer doesn’t know exactly what they’re trying to accomplish! Often, they are writing to find out.
    3. First, I’m impressed as hell by the Wordcraft team. Daphne Ippolito, Ann Yuan, Andy Coenen, Sehmon Burnam, and their colleagues engineered an impressive, provocative writing tool, but/and, more importantly, they investigated its use with sensitivity and courage.
    1. A Luhmann web article from 2001-06-30!

      Berzbach, Frank. “Künstliche Intelligenz aus Holz.” Online magazine. Magazin für junge Forschung, June 30, 2001. https://sciencegarden.net/kunstliche-intelligenz-aus-holz/.


      Interesting to see the stark contrast in zettelkasten method here in an article about Luhmann versus the discussions within the blogosphere, social media, and other online spaces circa 2018-2022.


      ᔥ[[Daniel Lüdecke]] in Arbeiten mit (elektronischen) Zettelkästen at 2013-08-30 (accessed:: 2023-02-10 06:15:58)

    1. The breakthroughs are all underpinned by a new class of AI models that are more flexible and powerful than anything that has come before. Because they were first used for language tasks like answering questions and writing essays, they’re often known as large language models (LLMs). OpenAI’s GPT3, Google’s BERT, and so on are all LLMs. But these models are extremely flexible and adaptable. The same mathematical structures have been so useful in computer vision, biology, and more that some researchers have taken to calling them "foundation models" to better articulate their role in modern AI.

      Foundation Models in AI

      Large language models are, more generally, “foundation models”. They got the language-focused name because language tasks were where these models were first applied.

  2. Jan 2023
    1. To start with, a human must enter a prompt into a generative model in order to have it create content. Generally speaking, creative prompts yield creative outputs. “Prompt engineer” is likely to become an established profession, at least until the next generation of even smarter AI emerges.

      Generative AI requires prompt engineering, likely a new profession

      What domain experience does a prompt engineer need? How might this relate to specialties in librarianship?

    1. We appreciate this is a long span of time, and were concerned why any specific artificial memory system should last for so long.

      I suspect that artificial memory systems, particularly those that make some sort of logical sense, will indeed be long lasting ones.

      Given the long, unchanging history of the Acheulean hand axe, as an example, these sorts of ideas and practices were handed down from generation to generation.

      Given their ties to human survival, they're even more likely to persist.

      Indigenous memory systems in Aboriginal settings date to 65,000 years and also provide an example of long-lived systems.

    2. Francesco d'Errico has done much to advance our understanding of artificial/external memory systems.
    3. These may occur on rock walls, but were commonly engraved onto robust bones since at least the beginning of the European Upper Palaeolithic and African Late Stone Age, where it is obvious they served as artificial memory systems (AMS) or external memory systems (EMS) to coin the terms used in Palaeolithic archaeology and cognitive science respectively, exosomatic devices in which number sense is clearly evident (for definitions see d'Errico 1989; 1995a,b; d'Errico & Cacho 1994; d'Errico et al. 2017; Hayden 2021).

      Abstract marks have appeared on rock walls and engraved into robust bones as artificial memory systems (AMS) and external memory systems (EMS).

    1. Friedberg Judeo-Arabic Project, accessible at http://fjms.genizah.org. This project maintains a digital corpus of Judeo-Arabic texts that can be searched and analyzed.

      The Friedberg Judeo-Arabic Project contains a large corpus of Judeo-Arabic text which can be manually searched to help improve translations of texts, but it might also be profitably mined using information theoretic and corpus linguistic methods to provide larger group textual translations and suggestions at a grander scale.

    2. More recent additions to the website include a “jigsaw puzzle” screen that lets users view several items while playing with them to check whether they are “joins.” Another useful feature permits the user to split the screen into several panels and, thus, examine several items simultaneously (useful, e.g., when comparing handwriting in several documents). Finally, the “join suggestions” screen provides the results of a technologically groundbreaking computerized analysis of paleographic and codicological features that suggests possible joins or items written by the same scribe or belonging to the same codex.

      Computer means can potentially be used to check or suggest potential "joins" of fragments of historical documents.

      An example of some of this work can be seen in the Friedberg Genizah Project and their digital tools.

  3. Dec 2022
    1. The History of Zettelkasten The Zettelkasten method is a note-taking system developed by German sociologist and philosopher Niklas Luhmann. It involves creating a network of interconnected notes on index cards or in a digital database, allowing for flexible organization and easy access to information. The method has been widely used in academia and can help individuals better organize their thoughts and ideas.

      https://meso.tzyl.nl/2022/12/05/the-history-of-zettelkasten/

      If generated, it almost perfectly reflects the public consensus, but does a miserable job of reflecting deeper realities.

  4. Nov 2022
    1. Title: Artificial Intelligence and Democratic Values: Next Steps for the United States. Content: AI first appeared as a science at Dartmouth, yet the USA still lacks a national AI policy compared to Europe, where the Council of Europe is developing the first international AI convention and the EU earlier launched its data privacy law, the General Data Protection Regulation.

      In addition, China aims to become the "world leader in AI by 2030" and is developing digital infrastructure matched with its One Belt One Road project. The USA did not contribute to the UNESCO AI Recommendations, though it does work to promote democratic values and human rights and to integrate them into the governance of artificial intelligence.

      The USA and EU are facing challenges with transatlantic data flows, and the Ukrainian crisis has made the situation more difficult. In order to reinstate its leadership in AI policy, the United States should advance the policy initiative launched last year by the Office of Science and Technology Policy (OSTP) and strengthen efforts to support an AI Bill of Rights.

      EXCERPT: The USA believes that fostering public trust and confidence in AI technologies and protecting civil liberties, privacy, and American values in their application can establish responsible AI in the USA. Link: https://www.cfr.org/blog/artificial-intelligence-and-democratic-values-next-steps-united-states Topic: AI and Democratic Values Country: United States of America

    1. https://infiniteconversation.com/

      an AI generated, never-ending discussion between Werner Herzog and Slavoj Žižek. Everything you hear is fully generated by a machine. The opinions and beliefs expressed do not represent anyone. They are the hallucinations of a slab of silicon.

  5. Oct 2022
    1. https://www.explainpaper.com/

      Another in a growing line of research tools for processing and making sense of research literature including Research Rabbit, Connected Papers, Semantic Scholar, etc.

      Functionality includes the ability to highlight sections of research papers with natural language processing to explain what those sections mean. There's also a "chat" that allows you to ask questions about the paper which will attempt to return reasonable answers, which is an artificial intelligence sort of means of having an artificial "conversation with the text".

      cc: @dwhly @remikalir @jeremydean

    1. I would put creativity into three buckets. If we define creativity as coming up with something novel or new for a purpose, then I think what AI systems are quite good at the moment is interpolation and extrapolation.

      Demis Hassabis, the founder of DeepMind, classifies creativity in three ways: interpolation, extrapolation, and "true invention". He defines the first two traditionally, but gives a more vague description of the third. What exactly is "true invention"?

      How can one invent without any catalyst at all? How can one invent outside of a problem's solution space? Outside of the adjacent possible? Does this truly exist? Or does it not, by definition?

  6. Sep 2022
    1. https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/

      Good overview article of some of the psychology research behind misinformation in social media spaces including bots, AI, and the effects of cognitive bias.

      Probably worth mining the story for the journal articles and collecting/reading them.

    2. Information Overload

      Recall that this isn't new:

      Blair, Ann M. Too Much to Know: Managing Scholarly Information before the Modern Age. Yale University Press, 2010. https://yalebooks.yale.edu/book/9780300165395/too-much-know

      The new portions are the acceleration of the issue by social media and its inflammation by artificial intelligence.

  7. Aug 2022
    1. The term "stigmergy" was introduced by French biologist Pierre-Paul Grassé in 1959 to refer to termite behavior. He defined it as: "Stimulation of workers by the performance they have achieved." It is derived from the Greek words στίγμα stigma "mark, sign" and ἔργον ergon "work, action", and captures the notion that an agent’s actions leave signs in the environment, signs that it and other agents sense and that determine and incite their subsequent actions.[4][5]

      Theraulaz, Guy (1999). "A Brief History of Stigmergy". Artificial Life. 5 (2): 97–116. doi:10.1162/106454699568700. PMID 10633572. S2CID 27679536.

    1. I recall being told by a distinguished anthropological linguist, in 1953, that he had no intention of working through a vast collection of materials that he had assembled because within a few years it would surely be possible to program a computer to construct a grammar from a large corpus of data by the use of techniques that were already fairly well formalized.

      rose colored glasses...

    1. For the sake of simplicity, go to Graph Analysis Settings and disable everything but Co-Citations, Jaccard, Adamic Adar, and Label Propagation. I won't spend my time explaining each because you can find those in the net, but these are essentially algorithms that find connections for you. Co-Citations, for example, uses second order links or links of links, which could generate ideas or help you create indexes. It essentially automates looking through the backlinks and local graphs as it generates possible relations for you.
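
      A rough sketch, in Python, of two of the measures named above (Jaccard and Adamic-Adar), computed over an invented toy note graph; the plugin applies the same ideas to the links in an actual vault.

      ```python
      import math

      # Toy link graph: note names and links are invented for illustration.
      links = {
          "note-a": {"note-b", "note-c", "note-d"},
          "note-b": {"note-a", "note-c"},
          "note-c": {"note-a", "note-b", "note-d"},
          "note-d": {"note-a", "note-c"},
      }

      def jaccard(u, v):
          # Overlap of the two notes' neighbourhoods relative to their union.
          return len(links[u] & links[v]) / len(links[u] | links[v])

      def adamic_adar(u, v):
          # Shared neighbours count for more when they have few links of their own.
          shared = links[u] & links[v]
          return sum(1 / math.log(len(links[w])) for w in shared if len(links[w]) > 1)

      print("jaccard(note-b, note-d) =", round(jaccard("note-b", "note-d"), 3))
      print("adamic_adar(note-b, note-d) =", round(adamic_adar("note-b", "note-d"), 3))
      ```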
  8. Jul 2022
    1. AI text generator, a boon for bloggers? A test report

      While I wanted to investigate AI text generators further, I ended up writing a test report. I was quite stunned because the AI text generator turns out to be able to create a fully cohesive and to-the-point article in minutes. Here is the test report.

    1. A cognitive agent is needed to perform this very action (that needs to be recurrent)—and another agent is needed to further build on that (again recurrently and irrespective to the particular agents involved).

      This appears to be setting up the conditions for an artificial cognitive agent to be able to play a role (i.e., artificial intelligence).

    2. In this paper, we propose and analyse a potential power triangle between three kinds of mutually dependent, mutually threatening and co-evolving cognitive systems—the human being, the social system and the emerging synthetic intelligence. The question we address is what configuration between these powers would enable humans to start governing the global socio-econo-political system
      • Optimization problem - human beings, their social system, and AI - what is the optimal configuration?
    1. Superintelligence has long served as a source of inspiration for dystopian science fiction that showed humanity being overthrown, defeated, or imprisoned by machines.
  9. Jun 2022
    1. We've yet to see note-taking platforms meaningfully add AI affordances into their systems, but there are hints at how they could in other platforms.

      A promising project is Paul Bricman's Conceptarium.

    1. Dall-E delivers ten images for each request, and when you see results that contain sensitive or biased content, you can flag them to OpenAI for review. The question then becomes whether OpenAI wants Dall-E's results to reflect society's approximate reality or some idealized version. If an occupation is majority male or female, for instance, and you ask Dall-E to illustrate someone doing that job, the results can either reflect the actual proportion in society, or some even split between genders. They can also account for race, weight, and other factors. So far, OpenAI is still researching how exactly to structure these results. But as it learns, it knows it has choices to make.

      Philosophical questions for AI-generated artwork

      As if we needed more technology to dissolve a shared, cohesive view of reality, we need to consider how it is possible to tune the AI parameters to reflect some version of what is versus some version of how we want it to be.

    1. Harness collective intelligence augmented by digital technology, and unlock exponential innovation. Beyond old hierarchical structures and archaic tools.

      https://twitter.com/augmented_CI

      The words "beyond", "hierarchical", and "archaic" are all designed to marginalize prior thought and tools which all work, and are likely upon which this broader idea is built. This is a potentially toxic means of creating "power over" this prior art rather than a more open spirit of "power with".

  10. May 2022
    1. Bret Victor shared this post to make the point that we shouldn't be worrying about sentient AI right now; that the melting ice caps are way more of a threat than AGI. He linked to this article, saying that corporations act like non-human, intelligent entities that have real impacts in the world today and may be way more consequential than AI.

    1. Ben Williamson shared this post on Twitter, saying that it's a good idea to remove the words 'artificial intelligence' and 'AI' from policy statements, etc. as a way of talking about the specific details of a technology. We can see loads of examples of companies using 'AI' to obfuscate what they are really doing.

    1. The bulk of Vumacam’s subscribers have thus far been private security companies like AI Surveillance, which supply anything from armed guards to monitoring for a wide range of clients, including schools, businesses, and residential neighborhoods. This was always the plan: Vumacam CEO Croock started AI Surveillance with Nichol shortly after founding Vumacam and then stepped away to avoid conflicts with other Vumacam customers.

      AI-driven Surveillance-as-a-Service

      Vumacam provides the platform, AI-driven target selection, and human review. Others subscribe to that service and add their own layers of services to customers.

  11. Apr 2022
    1. Since most of our feeds rely on either machine algorithms or human curation, there is very little control over what we actually want to see.

      While algorithmic feeds and "artificial intelligences" might control large swaths of what we see in our passive acquisition modes, we can and certainly should spend more of our time in active search modes which don't employ these tools or methods.

      How might we better blend our passive and active modes of search and discovery while still having and maintaining the value of serendipity in our workflows?

      Consider the loss of library stacks in our research workflows. We've lost some of the serendipity of seeing the book titles on the shelf that are adjacent to the one we're looking for. What about the books just above and below it? How do we replicate that sort of serendipity in our digital world?

      How do we help prevent the shiny object syndrome? How can we stay on task rather than move on to the next pretty thing or topic presented to us by an algorithmic feed so that we can accomplish the task we set out to do? Certainly bookmarking a thing or a topic for later follow up can be useful so we don't go too far afield, but what other methods might we use? How can we optimize our random walks through life and a sea of information to tie disparate parts of everything together? Do we need to only rely on doing it as a broader species? Can smaller subgroups accomplish this if carefully planned, or is exploring the problem space only possible at mass scale? And even then we may be undershooting the goal by an order of magnitude (or ten)?

    1. ResearchRabbit, which fully launched in August 2021, describes itself as “Spotify for papers”.

      Research Rabbit is a search engine for academic research that was launched in August of 2021 and bills itself as "Spotify for papers." It uses artificial intelligence to recommend related papers to researchers and updates those recommendations based on the contents of one's growing corpus of interest.

    2. Connected Papers uses the publicly available corpus compiled by Semantic Scholar — a tool set up in 2015 by the Allen Institute for Artificial Intelligence in Seattle, Washington — amounting to around 200 million articles, including preprints.

      Semantic Scholar is a digital tool created by the Allen Institute for Artificial Intelligence in Seattle, Washington in 2015. Its corpus is publicly available for search and is used by other tools including Connected Papers.

    1. He continues by comparing open works to Quantum mechanics, and he arrives at the conclusion that open works are more like Einstein's idea of the universe, which is governed by precise laws but seems random at first. The artist in those open works arranges the work carefully so it could be re-organized by another but still keep the original voice or intent of the artist.

      Is physics open or closed?

      Could a play, made in a zettelkasten-like structure, be performed in a way so as to keep a consistent authorial voice?

      What potential applications does the idea of opera aperta have for artificial intelligence? Can it be created in such a way as to give an artificial brain a consistent "authorial voice"?

  12. Mar 2022
    1. The European project X5-GON (Global Open Education Network), which collects information on open educational resources and works well with a large contribution of artificial intelligence to analyze the documents in depth
    1. This generative model normally penalizes predicted toxicity and rewards predicted target activity. We simply proposed to invert this logic by using the same approach to design molecules de novo, but now guiding the model to reward both toxicity and bioactivity instead.

      Inverting a single term in the model's objective changed the output of the AI dramatically.
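
      A schematic sketch of that inversion. The scoring function and numbers below are invented stand-ins for the paper's learned bioactivity and toxicity predictors; the point is only that flipping one sign in the objective sends the same generative search toward a very different region of chemical space.

      ```python
      # Schematic of the objective flip described above; values are invented.
      def score(predicted_activity, predicted_toxicity, invert=False):
          # Normal drug-discovery objective: reward activity, penalize toxicity.
          toxicity_weight = 1.0 if invert else -1.0  # inverted run rewards toxicity too
          return predicted_activity + toxicity_weight * predicted_toxicity

      print(round(score(0.8, 0.9), 2))               # normal objective: -0.1
      print(round(score(0.8, 0.9, invert=True), 2))  # inverted objective: 1.7
      ```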

    1. Of course, users are still the source of the insight that makes a complete document also a compelling document.

      Nice that he takes a more humanistic viewpoint here rather than indicating that it will all be artificial intelligence in the future.

  13. Feb 2022
    1. Stay at the forefront of educational innovation

      What about a standard of care for students?

      Bragging about students not knowing how the surveillance technology works is unethical. Students using accessibility software or open educational resources shouldn't be punished for accidentally avoiding surveillance. pic.twitter.com/Uv7fiAm0a3

      — Ian Linkletter (@Linkletter) February 22, 2022

      #annotation https://t.co/wVemEk2yao

      — Remi Kalir (@remikalir) February 23, 2022
    1. At the back of Dr Duncan's book on the topic, Index, A History Of The, he includes not one but two indexes, in order to make a point.

      Dennis Duncan includes two indices in his book Index, A History of The, one by a professional human indexer and the second generated by artificial intelligence. He indicates that the human version is far better.

    1. We need to get our thoughts on paper first and improve them there, where we can look at them. Especially complex ideas are difficult to turn into a linear text in the head alone. If we try to please the critical reader instantly, our workflow would come to a standstill. We tend to call extremely slow writers, who always try to write as if for print, perfectionists. Even though it sounds like praise for extreme professionalism, it is not: A real professional would wait until it was time for proofreading, so he or she can focus on one thing at a time. While proofreading requires more focused attention, finding the right words during writing requires much more floating attention.

      Proofreading while rewriting, structuring, or doing the thinking or creative parts of writing is a form of bikeshedding. It is easy to focus on the small and picayune fixes when writing, but this distracts from the more important parts of the work which really need one's attention to be successful.

      Get your ideas down on paper and only afterwards work on proofreading at the end. Switching contexts from thinking and creativity to spelling, small bits of grammar, and typography can be taxing from the perspective of trying to multi-task.


      Link: Draft #4 and using Webster's 1913 dictionary for choosing better words/verbiage as a discrete step within the rewrite.


      Linked to above: Are there other dictionaries, thesauruses, books of quotations, or individual commonplace books, waste books that can serve as resources for finding better words, phrases, or phrasing when writing? Imagine searching through Thoreau's commonplace book for finding interesting turns of phrase. Naturally searching through one's own commonplace book is a great place to start, if you're saving those sorts of things, especially from fiction.

      Link this to Robin Sloan's AI talk and using artificial intelligence and corpuses of literature to generate writing.

  14. Jan 2022
    1. https://vimeo.com/232545219

      from: Eyeo Conference 2017

      Description

      Robin Sloan at Eyeo 2017 | Writing with the Machine | Language models built with recurrent neural networks are advancing the state of the art on what feels like a weekly basis; off-the-shelf code is capable of astonishing mimicry and composition. What happens, though, when we take those models off the command line and put them into an interactive writing environment? In this talk Robin presents demos of several tools, including one presented here for the first time. He discusses motivations and process, shares some technical tips, proposes a course for the future — and along the way, write at least one short story together with the audience: all of us, and the machine.

      Notes

      Robin created a corpus using If Magazine and Galaxy Magazine from the Internet Archive and used it as a writing tool. He talks about using a few other models for generating text.

      Some of the idea here is reminiscent of the way John McPhee used the 1913 Webster Dictionary for finding words (or le mot juste) for his work, as tangentially suggested in Draft #4 in The New Yorker (2013-04-22)

      Cross reference: https://hypothes.is/a/t2a9_pTQEeuNSDf16lq3qw and https://hypothes.is/a/vUG82pTOEeu6Z99lBsrRrg from https://jsomers.net/blog/dictionary


      Croatian acapella singing: klapa https://www.youtube.com/watch?v=sciwtWcfdH4


      Writing using the adjacent possible.


      Corpus building as an art [~37:00]

      Forgetting what one trained their model on and then seeing the unexpected come out of it. This is similar to Luhmann's use of the zettelkasten as a serendipitous writing partner.

      Open questions

      How might we use information theory to do this more easily?

      What does a person or machine's "hand" look like in the long term with these tools?

      Can we use corpus linguistics in reverse for this?

      What sources would you use to train your model?

      References:

      • Andrej Karpathy. 2015. "The Unreasonable Effectiveness of Recurrent Neural Networks"
      • Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, et al. "Generating sentences from a continuous space." 2015. arXiv: 1511.06349
      • Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. "A Hybrid Convolutional Variational Autoencoder for Text generation." arXiv:1702.02390
      • Soroush Mehri, et al. 2017. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." arXiv:1612.07837 applies neural networks to sound and sound production
    1. Markoff, a long-time chronicler of computing, sees Engelbart as one pole in a decades-long competition "between artificial intelligence and intelligence augmentation -- A.I. versus I.A."

      There is an interesting difference between artificial intelligence and intelligence augmentation. Index cards were already doing the second by the early 1940s.

  15. Dec 2021
  16. Nov 2021
  17. Oct 2021
  18. Sep 2021
  19. Aug 2021
    1. Provide more opportunities for new talent. Because healthcare has been relatively solid and stagnant in what it does, we're losing out on some of the new talent that comes out — who are developing artificial intelligence, who are working at high-tech firms — and those firms can pay significantly higher than hospitals for those talents. We have to find a way to provide some opportunities for that and apply those technologies to make improvements in healthcare.

      Interesting. Mr. Roach thinks healthcare is not doing enough to attract new types of talent (AI and emerging tech) into healthcare. We seem to be losing this talent to the technology sector.

      I would agree with this point. Why work in healthcare, with all of its massive demands, HIPAA, and the lack of people knowing what you are even building? Instead, you can go into tech, have a better quality of life, get paid so much more, and have the possibility of exiting due to a buyout from the healthcare industry.

    1. Building on platforms' stores of user-generated content, competing middleware services could offer feeds curated according to alternate ranking, labeling, or content-moderation rules.

      Already I can see too many companies relying on artificial intelligence to sort and filter this material, and it has the ability to cause even worse problems at the nth degree.

      Allowing the end user to easily control the content curation and filtering will be absolutely necessary, and even then, customer desire to do this will likely lose out to the automaticity of AI. Customer laziness will likely win the day on this, so the design around it must be robust.

  20. Jul 2021
    1. ᔥ John Pavlus in Melanie Mitchell Trains AI to Think With Analogies | Quanta Magazine (07/24/2021 17:19:52)

    1. Facebook AI. (2021, July 16). We’ve built and open-sourced BlenderBot 2.0, the first #chatbot that can store and access long-term memory, search the internet for timely information, and converse intelligently on nearly any topic. It’s a significant advancement in conversational AI. https://t.co/H17Dk6m1Vx https://t.co/0BC5oQMEck [Tweet]. @facebookai. https://twitter.com/facebookai/status/1416029884179271684

  21. Jun 2021
    1. It hadn’t learned sort of the concept of a paddle or the concept of a ball. It only learned about patterns of pixels.

      Cognition and perception are closely related in humans, as the theory of embodied cognition has shown. But until the concept of embodied cognition gained traction, we had developed a pretty intellectual concept of cognition: as something located in our brains, drained of emotions, utterly rational, deterministic, logical, and so on. This is still the concept of intelligence that rules research in AI.

    2. the original goal at least, was to have a machine that could be like a human, in that the machine could do many tasks and could learn something in one domain, like if I learned how to play checkers maybe that would help me learn better how to play chess or other similar games, or even that I could use things that I’d learned in chess in other areas of life, that we sort of have this ability to generalize the things that we know or the things that we’ve learned and apply it to many different kinds of situations. But this is something that’s eluded AI systems for its entire history.

      The truth is we do not need computers to excel at the things we do best, but to complement us. We should bet on cognitive extension instead of trying to re-create human intelligence, which is a legitimate area of research, but one that computer scientists should leave to cognitive science and neuroscience.

    1. Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”

      What if they're not? What if they're building an advertising machine to manipulate us into giving them all our money?

      From an investor perspective, the artificial intelligence answer certainly seems sexy, while some clever legerdemain keeps the public from seeing what's really going on behind the curtain.

    2. It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.”

      What if we want more serendipity? What if we don't know what we really want? Where is this in their system?

  22. May 2021
    1. Turing was an exceptional mathematician with a peculiar and fascinating personality and yet he remains largely unknown. In fact, he might be considered the father of the von Neumann architecture computer and the pioneer of Artificial Intelligence. And all thanks to his machines; both those that Church called “Turing machines” and the a-, c-, o-, unorganized- and p-machines, which gave rise to evolutionary computations and genetic programming as well as connectionism and learning. This paper looks at all of these and at why he is such an often overlooked and misunderstood figure.
  23. Apr 2021
    1. There is a tendency in short luck-heavy games to require you to play multiple rounds in one sitting, to balance the scores. This is one such game. This multiple-rounds "mechanic" feels like an artificial fix for the problem of luck. Saboteur 1 and 2 advise the same thing because the different roles in the game are not balanced. ("Oh, well. I had the bad luck to draw the Profiteer character this time. Maybe I'll I'll draw a more useful character in round 2.") This doesn't change the fact that you are really playing a series of short unbalanced games. Scores will probably even out... statistically speaking. The Lost Cities card game tries to deal with the luck-problem in the same way.

      possibly rename: games: luck: managing/mitigating the luck to games: luck: dealing with/mitigating the luck problem

    1. The insertion of an algorithm’s predictions into the patient-physician relationship also introduces a third party, turning the relationship into one between the patient and the health care system. It also means significant changes in terms of a patient’s expectation of confidentiality. “Once machine-learning-based decision support is integrated into clinical care, withholding information from electronic records will become increasingly difficult, since patients whose data aren’t recorded can’t benefit from machine-learning analyses,” the authors wrote.

      There is some work being done on federated learning, where the algorithm works on decentralised data that stays in place with the patient and the ML model is brought to the patient so that their data remains private.
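
      A bare-bones sketch of that idea (federated averaging): each site trains on its own patients' data locally and only shares parameter updates, never raw records. The "model" as a flat list of weights, the data, and the update rule below are simplified placeholders rather than any particular library's API.

      ```python
      # Sketch of federated averaging: sites share weight updates, never raw data.
      def local_update(global_weights, local_data):
          # Placeholder for a real local training step on one site's private records.
          return [w + 0.01 * x for w, x in zip(global_weights, local_data)]

      def federated_average(weight_sets):
          # The central server only ever sees parameters, which it averages.
          return [sum(ws) / len(weight_sets) for ws in zip(*weight_sets)]

      global_weights = [0.0, 0.0, 0.0]
      site_data = [[1.0, 2.0, 0.5], [0.5, 1.5, 1.0], [2.0, 0.0, 0.5]]  # never leaves each site

      for _ in range(3):  # a few communication rounds
          local_models = [local_update(global_weights, d) for d in site_data]
          global_weights = federated_average(local_models)

      print(global_weights)
      ```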

  24. Mar 2021
    1. In this respect, we join Fitzpatrick (2011) in exploring “the extent to which the means of media production and distribution are undergoing a process of radical democratization in the Web 2.0 era, and a desire to test the limits of that democratization”

      Something about this is reminiscent of WordPress' mission to democratize publishing. We can also compare it to Facebook, whose (stated) mission is to connect people, while its actual mission is to make money by seemingly radicalizing people to the extremes of our political spectrum.

      This highlights the fact that while many may look at content moderation on platforms like Facebook, or the deplatforming of people like Donald J. Trump or Alex Jones, as an anti-democratic move that removes their voices, in fact it is not. Because of Facebook's active move to accelerate extreme ideas by pushing them algorithmically, the platform is actively being un-democratic. Democratic behavior on Facebook would look like one voice, one account, and reach only commensurate with that person's standing in real life. Instead, the algorithmic timeline gives far outsized influence and reach to some of the most extreme voices on the platform. This is patently un-democratic.

    1. Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience.

      This and the prior note are also underpinned by the fact that only 10% of people are going to be responsible for the majority of posts, so if you can filter out the velocity that accrues to these people, you can effectively dampen down the crazy.

    2. In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

      The one thing many of these types of noxious content WILL have in common are the people at the fringes who are regularly promoting it. Why not latch onto that as a means of filtering?

    3. But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

      If the company can't help regulate itself using some sort of moral compass, it's imperative that government or other outside regulators should.

    4. ᔥ Joan Donovan, PhD in "This is just some of the best back story I’ve ever read. Facebooks web of influence unravels when @_KarenHao pulls the wrong thread. Sike!! (Only the Boston folks will get that.)" / Twitter (03/14/2021 12:10:09)

  25. Feb 2021
    1. The result was a mother, soft, warm, and tender, a mother with infinite patience, a mother available twenty-four hours a day, a mother that never scolded her infant and never struck or bit her baby in anger. Furthermore, we designed a mother-machine with maximal maintenance efficiency since failure of any system or function could be resolved by the simple substitution of black boxes and new component parts. It is our opinion that we engineered a very superior monkey mother, although this position is not held universally by the monkey fathers.

      By finding how important the monkeys' senses were to thriving, the development of this surrogate mother figure was able to demonstrate that the infant monkeys craved more than just milk.

  26. Jan 2021
  27. Dec 2020
  28. Nov 2020
    1. The real heart of the matter of selection, however, goes deeper than a lag in the adoption of mechanisms by libraries, or a lack of development of devices for their use. Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing. When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and re-enter on a new path.

      Bush emphasises the importance of retrieval in the storage of information. He talks about technical limitations, but in this paragraph he stresses that retrieval is made more difficult by the "artificiality of systems of indexing", in other words, our default file-cabinet metaphor for storing information.

      Information in such a hierarchical architecture is found by descending down into the hierarchy, and back up again. Moreover, the information we're looking for can only be in one place at a time (unless we introduce duplicates).

      Having found our item of interest, we need to ascend back up the hierarchy to make our next descent.

    1. I'm still calling this v1.00 as this is what will be included in the first print run.

      There seems to be an artificial pressure and a false assumption that the version that gets printed and included in the box be the "magic number" 1.00.

      But I think there is absolutely nothing bad or to be ashamed of to have the version number printed in the rule book be 1.47 or even 2.0. (Or, of course, you could just not print it at all.) It's just being transparent/honest about how many versions/revisions you've made. 

  29. Oct 2020
    1. Similarly, technology can help us control the climate, make AI safe, and improve privacy.

      regulation needs to surround the technology that will help with these things

    1. What if you could use AI to control the content in your feed? Dialing up or down whatever is most useful to you. If I’m on a budget, maybe I don’t want to see photos of friends on extravagant vacations. Or, if I’m trying to pay more attention to my health, encourage me with lots of salads and exercise photos. If I recently broke up with somebody, happy couple photos probably aren’t going to help in the healing process. Why can’t I have control over it all, without having to unfollow anyone. Or, opening endless accounts to separate feeds by topic. And if I want to risk seeing everything, or spend a week replacing my usual feed with images from a different culture, country, or belief system, couldn’t I do that, too? 

      Some great blue sky ideas here.

    1. Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science.
  30. Sep 2020
    1. synthesize

      To synthesize, by definition, is to create something chemically. Of the 118 elements, 20 are man-made via a nuclear reactor and/or a particle accelerator. These elements are unstable because they are built by fusing an atom's nucleus with more protons than it would usually have, which causes the stability to become dangerously chaotic, as it is not natural for the element. This is the building block for the atomic bomb's creation.

    1. Since re-rendering in Svelte happens at a more granular level than the component, there is no artificial pressure to create smaller components than would be naturally desirable, and in fact (because one-component-per-file) there is pressure in the opposite direction. As such, large components are not uncommon.
  31. Aug 2020
  32. Jul 2020
  33. Jun 2020
    1. each of them flows through each of the two layers of the encoder

      each of them flows through each of the two layers of EACH encoder, right?

    1. It made it challenging for the models to deal with long sentences.

      This is similar to autoencoders struggling with producing high-resolution imagery because of the compression that happens in the latent space, right?

    1. it seems that word-level models work better than character-level models

      Interesting, if you think about it, both when we as humans read and write, we think in terms of words or even phrases, rather than characters. Unless we're unsure how to spell something, the characters are a secondary thought. I wonder if this is at all related to the fact that word-level models seem to work better than character-level models.

    2. As you can see above, sometimes the model tries to generate latex diagrams, but clearly it hasn’t really figured them out.

      I don't think anyone has figured latex diagrams (tikz) out :')

    3. Antichrist

      uhhh should we be worried

    1. We only forget when we’re going to input something in its place. We only input new values to the state when we forget something older.

      seems like a decision aiming for efficiency

    2. outputs a number between 0 and 1 for each number in the cell state C_{t-1}

      remember, each line represents a vector.
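
      For reference, a sketch of the standard LSTM gating equations being described: the forget gate's sigmoid output (a number between 0 and 1 per component) scales the old cell state, while the input gate admits the new candidate values.

      ```latex
      f_t = \sigma\big(W_f \cdot [h_{t-1}, x_t] + b_f\big), \qquad
      i_t = \sigma\big(W_i \cdot [h_{t-1}, x_t] + b_i\big), \qquad
      C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t
      ```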

  34. May 2020
    1. Mei, X., Lee, H.-C., Diao, K., Huang, M., Lin, B., Liu, C., Xie, Z., Ma, Y., Robson, P. M., Chung, M., Bernheim, A., Mani, V., Calcagno, C., Li, K., Li, S., Shan, H., Lv, J., Zhao, T., Xia, J., … Yang, Y. (2020). Artificial intelligence for rapid identification of the coronavirus disease 2019 (COVID-19). MedRxiv, 2020.04.12.20062661. https://doi.org/10.1101/2020.04.12.20062661

    1. Results reveal a significant shift in the gut microbiome and metabolome within one day following morphine treatment compared to that observed after placebo. Morphine-induced gut microbial dysbiosis exhibited distinct characteristic signatures, including significant increase in communities associated with pathogenic function, decrease in communities associated with stress tolerance and significant impairment in bile acids and morphine-3-glucuronide/morphine biotransformation in the gut.

      Unsurprisingly, various substances appear to disrupt the microbiome; artificial sweeteners are not unique. Given that I don't worry about opioids, I probably shouldn't worry about sweeteners.

      However, opioids are known for causing constipation. That is to say, they have a clear effect on digestion. Perhaps I should worry about opioids rather than not worry about sweeteners.

  35. Apr 2020
    1. Although it has been proposed that NNS do not affect glycemia (3), data from several recent studies suggest that NNS are not physiologically inert. First, it has been demonstrated that the gastrointestinal tract (4,5) and the pancreas (6,7) can detect sugars through taste receptors and transduction mechanisms that are similar to those identified in taste cells in the mouth. Second, NNS-induced activation of gut sweet taste receptors in isolated duodenal L cells and pancreatic β-cells triggers the secretion of glucagon-like peptide 1 (GLP-1) (4,5) and insulin (6–9), respectively. Third, data from studies conducted in animal models demonstrate that NNS interact with sweet taste receptors expressed in enteroendocrine cells to increase both active and passive intestinal glucose absorption by upregulating the expression of sodium-dependent glucose transporter isoform 1 (5,10,11) and increasing the translocation of GLUT2 to the apical membrane of intestinal epithelia (12).

      This supports my previous assertion that the effects of artificial sweeteners on the microbiome are taste-mediated. However, I did not predict the intestinal taste receptors. That means that my previous way to falsify the claim, such as delivery by oral gavage, is no longer adequate. Nonetheless, interesting things could be learned from such tests.

    1. These variations were related to inflammation in the host

      In which direction? This statement makes me wonder if inflammation caused the changes in the microbiome.

      It seems possible that the sweetness itself is the ultimate cause. To test this, a study using oral gavage could be done. It's easily plausible that the flavor alters dietary patterns (I believe humans eat more calories in response to sweeteners; will need to check on a source). Alternatively, direct effects on the brain, and downstream effects on the body, are also not out of the question.

      The reason I suspect taste-mediated effects is that it seems unlikely that so many completely unrelated sweeteners would have such similar effects. However, one might expect more similar results than those found if that were the case (or the dose is so high that the taste changes for some, e.g. saccharin).

    1. Abdulla, A., Wang, B., Qian, F., Kee, T., Blasiak, A., Ong, Y. H., Hooi, L., Parekh, F., Soriano, R., Olinger, G. G., Keppo, J., Hardesty, C. L., Chow, E. K., Ho, D., & Ding, X. (n.d.). Project IDentif.AI: Harnessing Artificial Intelligence to Rapidly Optimize Combination Therapy Development for Infectious Disease Intervention. Advanced Therapeutics, n/a(n/a), 2000034. https://doi.org/10.1002/adtp.202000034

  36. Dec 2019
    1. Alexander Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed.

      Tags were used for discovery of specific types of content. Who needs that now that our new overlords of artificial intelligence and algorithmic feeds can tell us what we want to see?!

      Of course we still need tags!!! How are you going to know serendipitously that you need more poetry in your life until you run into the tag on a service like IndieWeb.xyz? An algorithmic feed is unlikely to notice--or at least in my decade of living with them I've yet to run into poetry in one.

  37. Aug 2019
    1. so there won’t be a blinking bunny, at least not yet, let’s train our bunny to blink on command by mixing stimuli (the tone and the air puff)

      Is that just how we all learn and evolve? 😲

    1. In 2015, the music streaming service Spotify created the playlist called Descobertas da Semana (Discover Weekly), which functions as a form of digital curation. The algorithm responsible for this playlist uses Collaborative Filtering, Natural Language Processing, and Audio Signal Processing techniques through Convolutional Neural Networks to compose the playlist weekly.[33]
    1. A notable by-product of a move of clinical as well as research data to the cloud would be the erosion of market power of EMR providers.

      But we have to be careful not to inadvertently favour the big tech companies in trying to stop favouring the big EMR providers.

    2. cloud computing is provided by a small number of large technology companies who have both significant market power and strong commercial interests outside of healthcare for which healthcare data might potentially be beneficial

      AI is controlled by these external forces. In what direction will this lead it?

    3. it has long been argued that patients themselves should be the owners and guardians of their health data and subsequently consent to their data being used to develop AI solutions.

      Mere consent isn't enough. We consent to give away all sorts of data for phone apps that we don't even really consider. We need much stronger awareness, or better defaults so that people aren't sharing things without proper consideration.

    4. To realize this vision and to realize the potential of AI across health systems, more fundamental issues have to be addressed: who owns health data, who is responsible for it, and who can use it? Cloud computing alone will not answer these questions—public discourse and policy intervention will be needed.

      This is part of the habit and culture of data use. And it's very different in health than in other sectors, given the sensitivity of the data, among other things.

    5. In spite of the widely touted benefits of “data liberation”,15 a sufficiently compelling use case has not been presented to overcome the vested interests maintaining the status quo and justify the significant upfront investment necessary to build data infrastructure.

      Advancing AI requires more than just AI stuff. It requires infrastructure and changes in human habit and culture.

    6. However, clinician satisfaction with EMRs remains low, resulting in variable completeness and quality of data entry, and interoperability between different providers remains elusive.11

      Another issue with complex systems: the data can be voluminous but of poor individual quality, relying on domain knowledge to interpret properly (e.g. that doctor didn't really prescribe 10x the recommended dose; it was probably an error).

    7. Second, most healthcare organizations lack the data infrastructure required to collect the data needed to optimally train algorithms to (a) “fit” the local population and/or the local practice patterns, a requirement prior to deployment that is rarely highlighted by current AI publications, and (b) interrogate them for bias to guarantee that the algorithms perform consistently across patient cohorts, especially those who may not have been adequately represented in the training cohort.9

      AI depends on:

      • static processes - if the population you are predicting changes relative to the one used to train the model, all bets are off. It remains to be seen how similar they need to be given the brittleness of AI algorithms.
      • homogeneous population - beyond race, what else is important? If we don't have a good theory of health, we don't know.
    8. Simply adding AI applications to a fragmented system will not create sustainable change.
    1. Both artists, through annotation, have produced new forms of public dialogue in response to other people (like Harvey Weinstein), texts (The New York Times), and ideas (sexual assault and racial bias) that are of broad social and political consequence.

      What about examples of future sorts of annotations/redactions like these with emerging technologies? Stories about deepfakes (like Obama calling Trump a "dipshit" or the Youtube Channel Bad Lip Reading redubbing the words of Senator Ted Cruz) are becoming more prevalent and these are versions of this sort of redaction taken to greater lengths. At present, these examples are obviously fake and facetious, but in short order they will be indistinguishable and more commonplace.

  38. Jun 2019
    1. The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
    1. We Need to Talk, AI. Dr. Julia Schneider and Lena Kadriye Ziyal. A Comic Essay on Artificial Intelligence

      A comic essay: this looks very, very promising

  39. May 2019
    1. Deep machine learning, which is using algorithms to replicate human thinking, is predicated on specific values from specific kinds of people—namely, the most powerful institutions in society and those who control them.

      This reminds me of this Reddit page

      The page takes pictures and text from other Reddit pages and uses them to create computer-generated posts and comments. It is interesting to see the intelligence and quality of understanding grow as it gathers more and more information.