297 Matching Annotations
  1. Last 7 days
    1. Australia's Cyber Security Strategy: a $1.66 billion cyber security package. The AFP gets $88 million; $66 million goes to critical infrastructure organisations to assess their networks for vulnerabilities; the ASD gets $1.35 billion (over a decade) to recruit 500 officers.

      Reasons Dutton gives for package:

      • child exploitation
      • criminals scamming, ransomware
      • foreign governments taking health data, and potential attacks on critical infrastructure

      The definition of critical infrastructure is expanded, and covered organisations are subject to obligations to improve their defences.

      Supporting cyber resilience of SMEs through information, training, and services to make them more secure.

  2. Oct 2020
    1. Similarly, technology can help us control the climate, make AI safe, and improve privacy.

      Regulation needs to surround the technology that will help with these things.

    1. What if you could use AI to control the content in your feed? Dialing up or down whatever is most useful to you. If I’m on a budget, maybe I don’t want to see photos of friends on extravagant vacations. Or, if I’m trying to pay more attention to my health, encourage me with lots of salads and exercise photos. If I recently broke up with somebody, happy couple photos probably aren’t going to help in the healing process. Why can’t I have control over it all, without having to unfollow anyone. Or, opening endless accounts to separate feeds by topic. And if I want to risk seeing everything, or spend a week replacing my usual feed with images from a different culture, country, or belief system, couldn’t I do that, too? 

      Some great blue sky ideas here.

    1. Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science.
  3. Sep 2020
    1. must consult oracles

      For example, to train an artificial intelligence, the Montreal philosopher Martin Gibert proposes showing AIs examples, models to follow (Gretas and Mother Teresas), rather than trying to teach them the concepts of moral philosophy.

  4. Aug 2020
    1. Advantages of people in [[Silicon Valley]]: super smart but not necessarily highly educated, so they don't just believe what everyone else does. They think outside the box. They're thinkers as well as people who have had to do things and pass [[reality]] tests. The only test most academics face is "can I publish this piece?"

      What distinguishes people in Silicon Valley from typical academics

  5. Jul 2020
  6. Jun 2020
    1. But tagging, alone, is still not good enough. Even our many tags become useless if/when their meaning changes (in our minds) by the time we go retrieve the data they point to. This could be years after we tagged something. Somehow, whether manually or automatically, we need agents and tools to help us keep our tags updated and relevant.

      Search engines can usually surface information faster (with less cognitive load) than recalling what and where you stored something in your second brain; with information abundant, you can always retrieve it from an external source in a JIT fashion.

    1. each of them flows through each of the two layers of the encoder

      each of them flows through each of the two layers of EACH encoder, right?
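
      The stacked-encoder picture can be sketched in a few lines of numpy. This is a hypothetical toy (single attention head, no layer norm, and FFN weights shared across layers purely for brevity; a real Transformer gives each encoder its own weights), meant only to make concrete that every token vector passes through both sub-layers (self-attention, then feed-forward) of every encoder in the stack:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                  # model dimension (illustrative)
tokens = rng.standard_normal((5, d))   # 5 token embeddings

W1 = rng.standard_normal((d, 16)) / 4  # toy position-wise FFN weights
W2 = rng.standard_normal((16, d)) / 4

def encoder_layer(X):
    """One encoder = self-attention sub-layer, then feed-forward sub-layer."""
    scores = X @ X.T / np.sqrt(d)                 # toy single-head attention
    scores -= scores.max(axis=1, keepdims=True)   # stabilize the softmax
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)
    X = X + A @ X                                 # attention + residual
    X = X + np.maximum(X @ W1, 0) @ W2            # position-wise FFN + residual
    return X / X.std()                            # crude stand-in for layer norm

# Every token's vector passes through BOTH sub-layers of EVERY stacked encoder.
out = tokens
for _ in range(6):   # e.g. the 6 encoders of the original Transformer
    out = encoder_layer(out)
```

Note that the loop is the point of the annotation: the two sub-layers are applied once per encoder, so with 6 encoders each token is transformed 12 times.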

    1. It made it challenging for the models to deal with long sentences.

      This is similar to autoencoders struggling with producing high-resolution imagery because of the compression that happens in the latent space, right?

    1. it seems that word-level models work better than character-level models

      Interesting, if you think about it, both when we as humans read and write, we think in terms of words or even phrases, rather than characters. Unless we're unsure how to spell something, the characters are a secondary thought. I wonder if this is at all related to the fact that word-level models seem to work better than character-level models.
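
      The word-vs-character contrast is easy to make concrete. A minimal sketch with an arbitrary example sentence:

```python
sentence = "word level models"

char_tokens = list(sentence)     # a character-level model sees 17 symbols
word_tokens = sentence.split()   # a word-level model sees 3 meaningful units

# A character-level model must first learn spelling before it can get at
# semantics; a word-level model starts from units that already carry meaning,
# at the cost of a much larger vocabulary and no way to represent unseen words.
assert len(char_tokens) > len(word_tokens)
```

Which mirrors the intuition above: characters are a secondary concern for humans too, unless spelling itself is in question.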

    2. As you can see above, sometimes the model tries to generate latex diagrams, but clearly it hasn’t really figured them out.

      I don't think anyone has figured latex diagrams (tikz) out :')

    3. Antichrist

      uhhh should we be worried

    1. We only forget when we’re going to input something in its place. We only input new values to the state when we forget something older.

      seems like a decision aiming for efficiency

    2. outputs a number between 0 and 1 for each number in the cell state C_{t-1}

      remember, each line represents a vector.
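
      A minimal numpy sketch of the forget gate, under hypothetical dimensions (hidden size 4, input size 3) and random weights: it computes f_t = sigmoid(W_f · [h_{t-1}, x_t] + b_f), whose entries all land strictly between 0 and 1 and act as per-element "keep" fractions on the old cell state:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions: hidden state size 4, input size 3.
rng = np.random.default_rng(0)
h_prev = rng.standard_normal(4)    # previous hidden state h_{t-1}
x_t = rng.standard_normal(3)       # current input x_t
W_f = rng.standard_normal((4, 7))  # forget-gate weights (illustrative)
b_f = np.zeros(4)

# f_t = sigmoid(W_f · [h_{t-1}, x_t] + b_f): one value per cell-state entry.
f_t = sigmoid(W_f @ np.concatenate([h_prev, x_t]) + b_f)

# Each entry of f_t lies in (0, 1), so multiplying elementwise scales how
# much of each component of C_{t-1} survives into the new cell state.
C_prev = rng.standard_normal(4)
C_kept = f_t * C_prev
```

This is why each line in the diagram represents a vector: the gate outputs one scalar per component of the cell state, not a single number for the whole state.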

    1. Just as journalists should be able to write about anything they want, comedians should be able to do the same and tell jokes about anything they please

      Where's the line though? Every output generates a feedback loop with the hivemind, turning into input to ourselves through our cracking, overwhelmed filters.

      It's unrealistic to expect everyone to see jokes as jokes, to rely on journalists for unbiased facts, and to see politicians as self-serving leeches; err, that's my bias speaking.

  7. May 2020
    1. Mei, X., Lee, H.-C., Diao, K., Huang, M., Lin, B., Liu, C., Xie, Z., Ma, Y., Robson, P. M., Chung, M., Bernheim, A., Mani, V., Calcagno, C., Li, K., Li, S., Shan, H., Lv, J., Zhao, T., Xia, J., … Yang, Y. (2020). Artificial intelligence for rapid identification of the coronavirus disease 2019 (COVID-19). MedRxiv, 2020.04.12.20062661. https://doi.org/10.1101/2020.04.12.20062661

    1. Shweta, F., Murugadoss, K., Awasthi, S., Venkatakrishnan, A., Puranik, A., Kang, M., Pickering, B. W., O’Horo, J. C., Bauer, P. R., Razonable, R. R., Vergidis, P., Temesgen, Z., Rizza, S., Mahmood, M., Wilson, W. R., Challener, D., Anand, P., Liebers, M., Doctor, Z., … Badley, A. D. (2020). Augmented Curation of Unstructured Clinical Notes from a Massive EHR System Reveals Specific Phenotypic Signature of Impending COVID-19 Diagnosis [Preprint]. Infectious Diseases (except HIV/AIDS). https://doi.org/10.1101/2020.04.19.20067660

  8. Apr 2020
    1. Abdulla, A., Wang, B., Qian, F., Kee, T., Blasiak, A., Ong, Y. H., Hooi, L., Parekh, F., Soriano, R., Olinger, G. G., Keppo, J., Hardesty, C. L., Chow, E. K., Ho, D., & Ding, X. (n.d.). Project IDentif.AI: Harnessing Artificial Intelligence to Rapidly Optimize Combination Therapy Development for Infectious Disease Intervention. Advanced Therapeutics, n/a(n/a), 2000034. https://doi.org/10.1002/adtp.202000034

    1. The world’s largest exhibitions organizer, London-based Informa plc, outlined on Thursday morning a series of emergency actions it’s taking to alleviate the impact of the COVID-19 pandemic on its events business, which drives nearly two-thirds of the company’s overall revenues. Noting that the effects have been “significantly deeper, more volatile and wide-reaching,” than was initially anticipated, the company says it’s temporarily suspending dividends, cutting executive pay and issuing new shares worth about 20% of its total existing capital in an effort to strengthen its balance sheet and reduce its approximately £2.4 billion ($2.9 billion) in debt to £1.4 billion ($1.7 billion). Further, Informa says it’s engaged in “constructive discussions” with its U.S.-based debt holders over a covenant waiver agreement.

      Informa Group, which owns publishers such as Taylor & Francis, is taking measures through its Informa Intelligence Division in its conferences and events sector. That sector provides two thirds of its total revenues; its debt stands at $2.9 billion. It is issuing shares and, for the North American market, negotiating debt agreements. Meanwhile, the publishing side, which contributes 35% of revenues, remains unchanged, with stable and solid forecasts. Stephen Carter is CEO.

    1. The public thus acquires a new function: that of a critical instance to which power must expose itself.

      Function of the public sphere: a critical apparatus (critique is what produces the public sphere).

      To maintain its legitimacy, power must be exposed to the public sphere and show itself to it with transparency; it must be open to challenge; if it does not withstand public criticism, it does not deserve to remain in place.

      The possibility of challenging public authority is comparable to the publication of security protocols used in the public domain (e.g. SSL/TLS): the security of encrypted content derives its robustness precisely from the fact that the algorithm is public; anyone could challenge it at any moment, which is how we make sure all of its flaws are eliminated (and collective intelligence can be called on, if need be).

    1. Smart guided-tour applications rely on a Visitor Flow Management Process (VFMP) to steer visitors toward the areas where they are fewest. The idea is to combine real-time attendance data for each space with visitors' wishes and tastes to suggest the ideal personalized route.

      An argument in favour of AI: it does manage visitor flow, but adds a second benefit: proposing an ideal route. This additional benefit can be seen as a rhetorical argument of the Logos type.

    2. Some smart technologies used in other sectors could be transposed to museums. With big data, it is possible to know attendance levels by date and time, the types of visitors by day and period, or the average visit duration against various parameters such as the weather.

      An inductive epistemic argument and a Logos-type rhetorical argument.

      We move on to artificial intelligence, a cutting-edge technology, which lends credibility to the claim that digital technology is beneficial.

  9. Feb 2020
    1. visuals are processed 60,000 times faster in the brain than text, and visual aids in the classroom improve learning by up to 400 percent. Ideas presented graphically are easier to understand and remember than those presented as words (Kliegel et al., 1987).

      throw out this factoid when doing video?

  10. Dec 2019
    1. Ranking the intelligence of animals seems an increasingly pointless exercise when one considers the really important thing: how well that animal is adapted to its niche
    1. “NextNow Collaboratory is an interesting example of a new kind of collective intelligence: an Internet-enabled, portable social network, easily transferable from one social cause to another.”

      Sense Collective's TotemSDK brings together tools, protocols, platform integrations and best practices for extending collective intelligence beyond our current capabilities. A number of cryptographic primitives have emerged which support the amazing work of projects like the NextNow Collaboratory in exciting ways that help to upgrade the general purpose social computing substrate which make tools like hypothes.is so valuable.

    1. A natural language provides its user with a ready-made structure of concepts that establishes a basic mental structure, and that allows relatively flexible, general-purpose concept structuring. Our concept of language as one of the basic means for augmenting the human intellect embraces all of the concept structuring which the human may make use of.
    2. It has been jokingly suggested several times during the course of this study that what we are seeking is an "intelligence amplifier." (The term is attributed originally to W. Ross Ashby[2,3].) At first this term was rejected on the grounds that in our view one's only hope was to make a better match between existing human intelligence and the problems to be tackled, rather than in making man more intelligent. But deriving the concepts brought out in the preceding section has shown us that indeed this term does seem applicable to our objective.

      Accepting the term "intelligence amplification" does not imply any attempt to increase native human intelligence. The term "intelligence amplification" seems applicable to our goal of augmenting the human intellect in that the entity to be produced will exhibit more of what can be called intelligence than an unaided human could; we will have amplified the intelligence of the human by organizing his intellectual capabilities into higher levels of synergistic structuring. What possesses the amplified intelligence is the resulting H-LAM/T system, in which the LAM/T augmentation means represent the amplifier of the human's intelligence.

      In amplifying our intelligence, we are applying the principle of synergistic structuring that was followed by natural evolution in developing the basic human capabilities. What we have done in the development of our augmentation means is to construct a superstructure that is a synthetic extension of the natural structure upon which it is built. In a very real sense, as represented by the steady evolution of our augmentation means, the development of "artificial intelligence" has been going on for centuries.
    1. This is not a new idea. It is based on the vision expounded by Vannevar Bush in his 1945 essay “As We May Think,” which conjured up a “memex” machine that would remember and connect information for us mere mortals. The concept was refined in the early 1960s by the Internet pioneer J. C. R. Licklider, who wrote a paper titled “Man-Computer Symbiosis,” and the computer designer Douglas Engelbart, who wrote “Augmenting Human Intellect.” They often found themselves in opposition to their colleagues, like Marvin Minsky and John McCarthy, who stressed the goal of pursuing artificial intelligence machines that left humans out of the loop.

      Seymour Papert had an approach that provides a nice synthesis between these two camps, by leveraging early childhood development to provide insights on the creation of AI.

    2. Thompson’s point is that “artificial intelligence” — defined as machines that can think on their own just like or better than humans — is not yet (and may never be) as powerful as “intelligence amplification,” the symbiotic smarts that occur when human cognition is augmented by a close interaction with computers.

      Intelligence amplification over artificial intelligence. In reality you can't get to AI until you've mastered IA.

    1. Plants speak in a chemical vocabulary we can’t directly perceive or comprehend. The first important discoveries in plant communication were made in the lab in the nineteen-eighties, by isolating plants and their chemical emissions in Plexiglas chambers, but Rick Karban, the U.C. Davis ecologist, and others have set themselves the messier task of studying how plants exchange chemical signals outdoors, in a natural setting.
    1. Alexander Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed.

      Tags were used for discovery of specific types of content. Who needs that now that our new overlords of artificial intelligence and algorithmic feeds can tell us what we want to see?!

      Of course we still need tags!!! How are you going to know serendipitously that you need more poetry in your life until you run into the tag on a service like IndieWeb.xyz? An algorithmic feed is unlikely to notice--or at least in my decade of living with them I've yet to run into poetry in one.

  11. Nov 2019
    1. A multimedia approach to affective learning and training can result in more life-like trainings which replicate scenarios and thus provide more targeted feedback, interventions, and experience to improve decision making and outcomes. Rating: 7/10

    1. An emotional intelligence course initiated by Google became a tool to improve mindfulness, productivity, and emotional IQ. The course has since expanded into other businesses which report that employees are coping better with stressors and challenges. Rating: 7/10 Key questions...what is the format of the course, tools etc?

  12. Sep 2019
    1. The idea of a “plant intelligence”—an intelligence that goes beyond adaptation and reaction and into the realm of active memory and decision-making—has been in the air since at least the early seventies.

      what is intelligence after all?

    2. “Trees do not have will or intention. They solve problems, but it’s all under hormonal control, and it all evolved through natural selection.”

      is having will or intention akin to having intelligence?

  13. Aug 2019
    1. so there won’t be a blinking bunny, at least not yet, let’s train our bunny to blink on command by mixing stimuli ( the tone and the air puff)

      Is it just that how we all learn and evolve? 😲

    1. HTM and SDR's - part of how the brain implements intelligence.

      "In this first introductory episode of HTM School, Matt Taylor, Numenta's Open Source Flag-Bearer, walks you through the high-level theory of Hierarchical Temporal Memory in less than 15 minutes."

    1. A notable by-product of a move of clinical as well as research data to the cloud would be the erosion of market power of EMR providers.

      But we have to be careful not to inadvertently favour the big tech companies in trying to stop favouring the big EMR providers.

    2. cloud computing is provided by a small number of large technology companies who have both significant market power and strong commercial interests outside of healthcare for which healthcare data might potentially be beneficial

      AI is controlled by these external forces. In what direction will this lead it?

    3. it has long been argued that patients themselves should be the owners and guardians of their health data and subsequently consent to their data being used to develop AI solutions.

      Mere consent isn't enough. We consent to give away all sorts of data for phone apps that we don't even really consider. We need much stronger awareness, or better defaults so that people aren't sharing things without proper consideration.

    4. To realize this vision and to realize the potential of AI across health systems, more fundamental issues have to be addressed: who owns health data, who is responsible for it, and who can use it? Cloud computing alone will not answer these questions—public discourse and policy intervention will be needed.

      This is part of the habit and culture of data use. And it's very different in health than in other sectors, given the sensitivity of the data, among other things.

    5. In spite of the widely touted benefits of “data liberation”,15 a sufficiently compelling use case has not been presented to overcome the vested interests maintaining the status quo and justify the significant upfront investment necessary to build data infrastructure.

      Advancing AI requires more than just AI stuff. It requires infrastructure and changes in human habit and culture.

    6. However, clinician satisfaction with EMRs remains low, resulting in variable completeness and quality of data entry, and interoperability between different providers remains elusive.11

      Another issue with complex systems: the data can be voluminous but of poor individual quality, relying on domain knowledge for proper interpretation (e.g. that doctor didn't really prescribe 10x the recommended dose; it was probably an error).

    7. Second, most healthcare organizations lack the data infrastructure required to collect the data needed to optimally train algorithms to (a) “fit” the local population and/or the local practice patterns, a requirement prior to deployment that is rarely highlighted by current AI publications, and (b) interrogate them for bias to guarantee that the algorithms perform consistently across patient cohorts, especially those who may not have been adequately represented in the training cohort.9

      AI depends on:

      • static processes - if the population you are predicting changes relative to the one used to train the model, all bets are off. It remains to be seen how similar they need to be given the brittleness of AI algorithms.
      • homogeneous population - beyond race, what else is important? If we don't have a good theory of health, we don't know.
    8. Simply adding AI applications to a fragmented system will not create sustainable change.
    1. Both artists, through annotation, have produced new forms of public dialogue in response to other people (like Harvey Weinstein), texts (The New York Times), and ideas (sexual assault and racial bias) that are of broad social and political consequence.

      What about examples of future sorts of annotations/redactions like these with emerging technologies? Stories about deepfakes (like Obama calling Trump a "dipshit" or the Youtube Channel Bad Lip Reading redubbing the words of Senator Ted Cruz) are becoming more prevalent and these are versions of this sort of redaction taken to greater lengths. At present, these examples are obviously fake and facetious, but in short order they will be indistinguishable and more commonplace.

  14. Jul 2019
  15. Jun 2019
    1. The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
  16. May 2019
    1. Deep machine learning, which is using algorithms to replicate human thinking, is predicated on specific values from specific kinds of people—namely, the most powerful institutions in society and those who control them.

      This reminds me of this Reddit page

      The page takes pictures and text from other Reddit pages and uses them to create computer-generated posts and comments. It is interesting to see the intelligence and quality of understanding grow as it gathers more and more information.

    1. government investments
    2. initiatives from the U.S., China, and Europ
    3. Recent Government Initiatives
    4. engagement in AI activities by academics, corporations, entrepreneurs, and the general public

      Volume of Activity

    5. Derivative Measures
    6. AI Vibrancy Index
    7. limited gender diversity in the classroom
    8. improvement in natural language
    9. the COCO leaderboard
    10. patents
    11. robot operating system downloads,
    12. he GLUE metric
    13. robot installations
    14. AI conference attendance
    15. the speed at which computers can be trained to detect objects

      Technical Performance

    16. quality of question answering

      Technical Performance

    17. changes in AI performance

      Technical Performance

    18. Technical Performance
    19. number of undergraduates studying AI

      Volume of Activity

    20. growth in venture capital funding of AI startups

      Volume of Activity

    21. percent of female applicants for AI jobs

      Volume of Activity

    22. Volume of Activity
    23. increased participation in organizations like AI4ALL and Women in Machine Learning
    24. producers of AI patents
    25. ML teaching events
    26. University course enrollment
    27. 83 percent of 2017 AI papers
    1. Methodology: The classic OSINT methodology you will find everywhere is straightforward:

      • Define requirements: What are you looking for?
      • Retrieve data
      • Analyze the information gathered
      • Pivoting & Reporting: Either define new requirements by pivoting on data just gathered, or end the investigation and write the report.

      Etienne's blog! Amazing resource for OSINT; particularly focused on technical attacks.

    1. There’s a bug in the evolutionary code that makes up our brains.

      Saying it's a "bug" implies that it's bad. But something this significant likely improved our evolutionary fitness in the past. This "bug" is more of a previously-useful adaptation. Whether it's still useful or not is another question, but it might be.

  17. Apr 2019
    1. Ashley Norris is the Chief Academic Officer at ProctorU, an organization that provides online exam proctoring for schools. This article has an interesting overview of the negative side of technology advancements and what that has meant for students' ability to cheat. While the article does culminate as an ad, of sorts, for ProctorU, it is an interesting read and sparks thoughts on ProctorU's use of both human monitors for testing and its integration of Artificial Intelligence into the process.

      Rating: 9/10.

  18. Mar 2019
    1. If you do not like the price you’re being offered when you shop, do not take it personally: many of the prices we see online are being set by algorithms that respond to demand and may also try to guess your personal willingness to pay. What’s next? A logical next step is that computers will start conspiring against us. That may sound paranoid, but a new study by four economists at the University of Bologna shows how this can happen.
    1. Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning.

      Again, this doesn't conflict with a machine-learning or deep-learning or neural-net way of seeing IP.

    2. No ‘copy’ of the story is ever made

      Or, the copy initially made is changed over time since human "memory" is interdependent and interactive with other brain changes, whereas each bit in computer memory is independent of all other bits.

      However, machine learning probably results in interactions between bits as the learning algorithm is exposed to more training data. The values in a deep neural network interact in ways that are not so obvious. So this machine-human analogy might be getting new life with machine learning.

    3. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight

      I don't see how this is true. The IP perspective depends on algorithms. There are many different algorithms to perform various tasks. Some perform reverse-kinematic calculations, but others conduct simpler, repeated steps. In computer science, this might be dynamic programming, recursive algorithms, or optimization. It seems that the IP metaphor still fits: it's just that those using the metaphor may not have updated their model of IP to be more modern.

    1. It is no wonder that AI is gaining popularity; many facts and advantages are driving its growth, and the essential peculiarities are well presented in this article.

    1. You were beginning to gather that there were other symbols mixed with the words that might be part of a sentence, and that the different parts of what made a full-thought statement (your feeling about what a sentence is) were not just laid out end to end as you expected.

      This suggests that Joe is doing something almost completely unrecognizable--with language at least. I guess my assumption is that I would know what Joe was doing; he'd just be doing it so quickly I wouldn't be able to follow. And he'd complete the task--a task I recognize--far more quickly than I possibly could using comparable analog technologies. Perhaps this is me saying: I buy Engelbart's augmentation idea on the level of efficiency but remain skeptical of, or at least have yet to realize, its transformative effect on intellect itself.

    2. we provide him as much help as possible in making a plan of action. Then we give him as much help as we can in carrying it out. But we also have to allow him to change his mind at almost any point, and to want to modify his plans.

      I'm thinking about the role of AI tutors/advisors here. How often do they operate in the kind of flexible way described here? I wonder if they can without actual human intervention.

  19. Feb 2019
    1. In amplifying our intelligence, we are applying the principle of synergistic structuring that was followed by natural evolution in developing the basic human capabilities. What we have done in the development of our augmentation means is to construct a superstructure that is a synthetic extension of the natural structure upon which it is built. In a very real sense, as represented by the steady evolution of our augmentation means, the development of "artificial intelligence" has been going on for centuries.

      Engelbart explicitly noted that what he was trying to do was not just hack culture, which is what significant innovations accomplish, but to hack the process by which biological and cultural co-evolution has bootstrapped itself to this point. Culture used the capabilities provided by biological evolution -- language, thumbs, etc. -- to improve human ways of living much faster than biological evolution can do, by not just inventing, but passing along to each other and future generations the knowledge of what was invented and how to invent. Engelbart proposes an audio-visual-tactile interface to computing as a tool for consciously accelerating the scope and power of individual and collective intelligence.

    2. Our culture has evolved means for us to organize the little things we can do with our basic capabilities so that we can derive comprehension from truly complex situations, and accomplish the processes of deriving and implementing problem solutions. The ways in which human capabilities are thus extended are here called augmentation means, and we define four basic classes of them:

      • Artifacts—physical objects designed to provide for human comfort, for the manipulation of things or materials, and for the manipulation of symbols.
      • Language—the way in which the individual parcels out the picture of his world into the concepts that his mind uses to model that world, and the symbols that he attaches to those concepts and uses in consciously manipulating the concepts ("thinking").
      • Methodology—the methods, procedures, strategies, etc., with which an individual organizes his goal-centered (problem-solving) activity.
      • Training—the conditioning needed by the human being to bring his skills in using Means 1, 2, and 3 to the point where they are operationally effective.

      The system we want to improve can thus be visualized as a trained human being together with his artifacts, language, and methodology. The explicit new system we contemplate will involve as artifacts computers, and computer-controlled information-storage, information-handling, and information-display devices. The aspects of the conceptual framework that are discussed here are primarily those relating to the human being's ability to make significant use of such equipment in an integrated system.

      To me, this is the most prescient of Engelbart's future visions, and the seed for future study of culture-technology co-evolution. I talked with Engelbart about this passage over the years and we agreed that although the power of the artifacts, from RAM to CPU speed to network bandwidth, had improved by the billionfold since 1962, the "softer" parts of the formula -- the language, methodology, and training -- have not advanced so much. Certainly language, training methods and pedagogy, and collaborative strategies have evolved with the growth and spread of digital media, but are still lagging. H/LAMT interests me even more today than it did thirty years ago because Engelbart unknowingly forecast the fundamental elements of what has come to be called cultural-biological co-evolution. I gave a TED talk in 2005, calling for an interdisciplinary study of human cooperation -- and obstacles to cooperation. It seems that in recent years an interdisciplinary understanding has begun to emerge. Joseph Henrich at Harvard, for one, in his recent book, The Secret of Our Success, noted:

      Drawing insights from lost European Explorers, clever chimpanzees, hunter-gatherers, cultural neuroscience, ancient bones and the human genome, Henrich shows that it’s not our general intelligence, innate brain power, or specialized mental abilities that explain our success. Instead, it’s our collective brains, which arise from a combination of our ability to learn selectively from each other and our sociality. Our collective brains, which often operate outside of any individual’s conscious awareness, gradually produce increasingly complex, nuanced and subtle technological, linguistic and social products over generations.

      Tracking this back into the mist of our evolutionary past, and to the remote corners of the globe, Henrich shows how this non-genetic system of cultural inheritance has long driven human genetic evolution. By producing fire, cooking, water containers, tracking know-how, plant knowledge, words, hunting strategies and projectiles, culture-driven genetic evolution expanded our brains, shaped our anatomy and physiology, and influenced our psychology, making us into the world’s only living cultural species. Only by understanding cultural evolution, can we understand human genetic evolution.

      Henrich, Boyd, and Richerson wrote about the social fundamentals that distinguish human culture's methods of evolving collective intelligence in The Origin and Evolution of Culture:

      Surely, without punishment, language, technology, individual intelligence and inventiveness, ready establishment of reciprocal arrangements, prestige systems and solutions to games of coordination, our societies would take on a distinctly different cast. Thus, a major constraint on explanations of human sociality is its systemic structure