732 Matching Annotations
  1. Jul 2021
    1. via John Pavlus in Melanie Mitchell Trains AI to Think With Analogies | Quanta Magazine (07/24/2021 17:19:52)

    1. Facebook AI. (2021, July 16). We’ve built and open-sourced BlenderBot 2.0, the first #chatbot that can store and access long-term memory, search the internet for timely information, and converse intelligently on nearly any topic. It’s a significant advancement in conversational AI. https://t.co/H17Dk6m1Vx https://t.co/0BC5oQMEck [Tweet]. @facebookai. https://twitter.com/facebookai/status/1416029884179271684

  2. Jun 2021
    1. reflexive collective intelligence

      It would therefore not simply be a matter of collectively becoming "more intelligent" (in the sense of efficient, within a strictly scientific and technical paradigm meant to accelerate the functioning of the economy), but also reflexive: reflecting on the conditions of this renewed society, caught up in new, extremely concentrated and asymmetric power dynamics.

    1. It hadn’t learned sort of the concept of a paddle or the concept of a ball. It only learned about patterns of pixels.

      Cognition and perception are closely related in humans, as the theory of embodied cognition has shown. But until the concept of embodied cognition gained traction, we had developed a pretty intellectual concept of cognition: as something located in our brains, drained of emotions, utterly rational, deterministic, logical, and so on. This is still the concept of intelligence that rules research in AI.

    2. the original goal at least, was to have a machine that could be like a human, in that the machine could do many tasks and could learn something in one domain, like if I learned how to play checkers maybe that would help me learn better how to play chess or other similar games, or even that I could use things that I’d learned in chess in other areas of life, that we sort of have this ability to generalize the things that we know or the things that we’ve learned and apply it to many different kinds of situations. But this is something that’s eluded AI systems for its entire history.

      The truth is we do not need computers to excel at the things we do best, but to complement us. We should bet on cognitive extension instead of trying to re-create human intelligence (a legitimate area of research, but one that computer scientists should leave to cognitive science and neuroscience).

    1. Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”

      What if they're not? What if they're building an advertising machine to manipulate us into giving them all our money?

      From an investor perspective, the artificial intelligence answer certainly seems sexy, while some clever legerdemain keeps the public from seeing what's really going on behind the curtain.

    2. It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.”

      What if we want more serendipity? What if we don't know what we really want? Where is this in their system?

  3. May 2021
    1. Turing was an exceptional mathematician with a peculiar and fascinating personality and yet he remains largely unknown. In fact, he might be considered the father of the von Neumann architecture computer and the pioneer of Artificial Intelligence. And all thanks to his machines; both those that Church called “Turing machines” and the a-, c-, o-, unorganized- and p-machines, which gave rise to evolutionary computations and genetic programming as well as connectionism and learning. This paper looks at all of these and at why he is such an often overlooked and misunderstood figure.
  4. Mar 2021
    1. In this respect, we join Fitzpatrick (2011) in exploring “the extent to which the means of media production and distribution are undergoing a process of radical democratization in the Web 2.0 era, and a desire to test the limits of that democratization”

      Something about this is reminiscent of WordPress' mission to democratize publishing. We can also compare it to Facebook, whose (stated) mission is to connect people, while its actual mission is to make money, seemingly by radicalizing people to the extremes of our political spectrum.

      This highlights the fact that while many may look at content moderation on platforms like Facebook, such as removing voices or deplatforming people like Donald J. Trump or Alex Jones, as an anti-democratic move, in fact it is not. Because of Facebook's active choice to accelerate extreme ideas by pushing them algorithmically, the platform itself is acting un-democratically. Democratic behavior on Facebook would look like one voice, one account, and reach commensurate only with that person's standing in real life. Instead, the algorithmic timeline gives far outsized influence and reach to some of the most extreme voices on the platform. This is patently un-democratic.

    1. Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience.

      This and the prior note are also underpinned by the fact that only 10% of people are going to be responsible for the majority of posts, so if you can filter out the velocity that accrues to these people, you can effectively dampen down the crazy.

    2. In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

      The one thing many of these types of noxious content WILL have in common is the people at the fringes who regularly promote it. Why not latch onto that as a means of filtering?

    3. But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

      If the company can't help regulate itself using some sort of moral compass, it's imperative that government or other outside regulators should.

    4. via Joan Donovan, PhD in "This is just some of the best back story I’ve ever read. Facebooks web of influence unravels when @_KarenHao pulls the wrong thread. Sike!! (Only the Boston folks will get that.)" / Twitter (03/14/2021 12:10:09)

    1. System architects: equivalents to architecture and planning for a world of knowledge and data

      Both government and business need new skills to do this work well. At present the capabilities described in this paper are divided up. Parts sit within data teams; others in knowledge management, product development, research, policy analysis or strategy teams, or in the various professions dotted around government, from economists to statisticians. In governments, for example, the main emphasis of digital teams in recent years has been very much on service design and delivery, not intelligence. This may be one reason why some aspects of government intelligence appear to have declined in recent years – notably the organisation of memory.

      What we need is a skill set analogous to architects. Good architects learn to think in multiple ways – combining engineering, aesthetics, attention to place and politics. Their work necessitates linking awareness of building materials, planning contexts, psychology and design. Architecture sits alongside urban planning which was also created as an integrative discipline, combining awareness of physical design with finance, strategy and law. So we have two very well-developed integrative skills for the material world. But there is very little comparable for the intangibles of data, knowledge and intelligence.

      What’s needed now is a profession with skills straddling engineering, data and social science – who are adept at understanding, designing and improving intelligent systems that are transparent and self-aware. Some should also specialise in processes that engage stakeholders in the task of systems mapping and design, and make the most of collective intelligence. As with architecture and urban planning supply and demand need to evolve in tandem, with governments and other funders seeking to recruit ‘systems architects’ or ‘intelligence architects’ while universities put in place new courses to develop them.
  5. Feb 2021
  6. Jan 2021
    1. As an opening move, I’d suggest that we could reconceptualize intelligence as NaQ (neuroacoustic quotient), or ‘the capacity to cleanly switch between different complex neuroacoustic profiles.’

      also seems more neutral and embracing the differences in [[neurodiversity]] / individual thinking vs relentless optimizing for a certain KPI (like for IQs) #[[to write]]

  7. Dec 2020
    1. collective creation of meaning that is at the heart of human intelligence

      the goal of collective intelligence, and of the digital humanities as a discipline practiced in community

    2. researchers in the humanities must set the example (in their own practice!) of a production of meaning that offers itself to knowledge as transparently as possible

      An injunction to the makers of knowledge; another piece of Pierre Lévy's program of collective intelligence?

      the ethical side of it?

  8. Nov 2020
  9. Oct 2020
    1. Australia's Cyber Security Strategy: a $1.66 billion cyber security package. The AFP gets $88 million; $66 million goes to critical infrastructure organisations to assess their networks for vulnerabilities; the ASD gets $1.35 billion (over a decade) to recruit 500 officers.

      Reasons Dutton gives for package:

      • child exploitation
      • criminals scamming, ransomware
      • foreign governments taking health data and potential attacks to critical infrastructure

      What is defined as critical infrastructure is expanded and subject to obligations to improve their defences.

      Supporting cyber resilience of SMEs through information, training, and services to make them more secure.

    1. Similarly, technology can help us control the climate, make AI safe, and improve privacy.

      regulation needs to surround the technology that will help with these things

    1. What if you could use AI to control the content in your feed? Dialing up or down whatever is most useful to you. If I’m on a budget, maybe I don’t want to see photos of friends on extravagant vacations. Or, if I’m trying to pay more attention to my health, encourage me with lots of salads and exercise photos. If I recently broke up with somebody, happy couple photos probably aren’t going to help in the healing process. Why can’t I have control over it all, without having to unfollow anyone. Or, opening endless accounts to separate feeds by topic. And if I want to risk seeing everything, or spend a week replacing my usual feed with images from a different culture, country, or belief system, couldn’t I do that, too? 

      Some great blue sky ideas here.
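
      A minimal sketch of what that kind of user-controlled dial could look like: rank the feed by user-set topic weights. Everything here (topic labels, relevance scores, weights) is an illustrative assumption, not any platform's actual API.

      ```python
      # Hypothetical sketch: rank a feed by user-adjustable topic weights.
      # Topic labels, relevance scores, and weights are illustrative assumptions.
      posts = [
          {"id": 1, "topics": {"vacation": 0.9, "friends": 0.6}},
          {"id": 2, "topics": {"health": 0.8, "exercise": 0.7}},
          {"id": 3, "topics": {"couples": 0.9}},
      ]

      # The user dials topics up (positive) or down (negative).
      user_weights = {"vacation": -1.0, "health": 1.0, "exercise": 0.5, "couples": -0.8}

      def score(post):
          # Sum of (topic relevance x user preference); unknown topics stay neutral.
          return sum(rel * user_weights.get(topic, 0.0)
                     for topic, rel in post["topics"].items())

      feed = sorted(posts, key=score, reverse=True)
      print([p["id"] for p in feed])  # [2, 3, 1] -> health first, vacations last
      ```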

    1. Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science.
  10. Sep 2020
    1. must consult oracles

      For example, to train an artificial intelligence, the Montreal philosopher Martin Gibert proposes showing AIs examples, models to follow (Gretas and Mother Teresas), rather than trying to teach them the concepts of moral philosophy.

  11. Aug 2020
    1. Advantages of people in [[Silicon Valley]]: **super smart but not necessarily highly educated so they don’t just believe what everyone else does.** They think outside the box. They’re thinkers as well as people that have had to do things and pass [[reality]] tests. The only test most academics face is "can I publish this piece?"

      What distinguishes people in Silicon Valley from typical students

  12. Jul 2020
  13. Jun 2020
    1. But tagging, alone, is still not good enough. Even our many tags become useless if/when their meaning changes (in our minds) by the time we go retrieve the data they point to. This could be years after we tagged something. Somehow, whether manually or automatically, we need agents and tools to help us keep our tags updated and relevant.

      Search engines can usually surface that faster (less cognitive load than recalling what and where you stored something) than you can retrieve it from your second brain (with information abundant, you can always retrieve it from an external source in a JIT fashion).

    1. each of them flows through each of the two layers of the encoder

      each of them flows through each of the two layers of EACH encoder, right?
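
      If that reading is right, a structural sketch looks like this, assuming the usual two sub-layers per encoder (self-attention, then a position-wise feed-forward network); the function names are placeholders, not the post's code.

      ```python
      # Structural sketch: every token representation passes through BOTH
      # sub-layers (self-attention, then feed-forward) of EVERY encoder in the stack.
      def encoder_block(token_vectors, self_attention, feed_forward):
          attended = self_attention(token_vectors)    # mixes information across positions
          return [feed_forward(v) for v in attended]  # applied to each position independently

      def encoder_stack(token_vectors, blocks):
          for self_attention, feed_forward in blocks:  # the output of one encoder
              token_vectors = encoder_block(token_vectors, self_attention, feed_forward)
          return token_vectors                         # ...is the input of the next
      ```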

    1. It made it challenging for the models to deal with long sentences.

      This is similar to autoencoders struggling with producing high-resolution imagery because of the compression that happens in the latent space, right?
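
      The bottleneck in question, in sketch form: a plain encoder-decoder (no attention) must squeeze the whole source sentence into one fixed-size context vector before decoding. The functions below are placeholders, only meant to show where the fixed size bites.

      ```python
      # Hypothetical sketch of the fixed-size bottleneck; encode_step/decode_step
      # are placeholder callables, not a real library API.
      def encode(tokens, encode_step, context_size=512):
          context = [0.0] * context_size            # same size no matter how long the sentence
          for tok in tokens:
              context = encode_step(context, tok)   # every token must fit into the same slots
          return context

      def decode(context, decode_step, max_len=50):
          out, tok = [], "<start>"
          while tok != "<end>" and len(out) < max_len:
              tok, context = decode_step(context, tok)  # the decoder sees only the context vector
              out.append(tok)
          return out
      ```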

    1. it seems that word-level models work better than character-level models

      Interesting. If you think about it, when we as humans read and write, we think in terms of words or even phrases rather than characters. Unless we're unsure how to spell something, the characters are a secondary thought. I wonder if this is at all related to the fact that word-level models seem to work better than character-level models.
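
      A toy comparison of the two units of prediction (the sentence is just the one from this note):

      ```python
      # A character-level model makes many more sequential predictions, and must
      # carry dependencies over many more steps, than a word-level model.
      sentence = "we think in terms of words or even phrases"

      word_tokens = sentence.split()   # word-level units
      char_tokens = list(sentence)     # character-level units

      print(len(word_tokens))  # 9 prediction steps
      print(len(char_tokens))  # 42 prediction steps
      ```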

    2. As you can see above, sometimes the model tries to generate latex diagrams, but clearly it hasn’t really figured them out.

      I don't think anyone has figured latex diagrams (tikz) out :')

    3. Antichrist

      uhhh should we be worried

    1. We only forget when we’re going to input something in its place. We only input new values to the state when we forget something older.

      seems like a decision aiming for efficiency
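
      In symbols (the coupled-gate variant this excerpt describes, in the post's own notation), forgetting and writing share a single gate f_t:

      $$C_t = f_t \odot C_{t-1} + (1 - f_t) \odot \tilde{C}_t$$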

    2. outputs a number between 0 and 1 for each number in the cell state C_{t-1}

      remember, each line represents a vector.
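
      For reference, that gate is a sigmoid over the previous hidden state and the current input, so every element of f_t lands between 0 ("completely forget") and 1 ("completely keep"):

      $$f_t = \sigma\left(W_f \cdot [h_{t-1}, x_t] + b_f\right)$$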

    1. Just as journalists should be able to write about anything they want, comedians should be able to do the same and tell jokes about anything they please

      Where's the line, though? Every output generates a feedback loop with the hivemind, turning into input to ourselves, with our cracking, overwhelmed filters.

      It's unrealistic to wish for everyone to see jokes as jokes, to rely on journalists to produce unbiased facts, and to see politicians as self-serving leeches; err, that's my bias speaking.

  14. May 2020
    1. Mei, X., Lee, H.-C., Diao, K., Huang, M., Lin, B., Liu, C., Xie, Z., Ma, Y., Robson, P. M., Chung, M., Bernheim, A., Mani, V., Calcagno, C., Li, K., Li, S., Shan, H., Lv, J., Zhao, T., Xia, J., … Yang, Y. (2020). Artificial intelligence for rapid identification of the coronavirus disease 2019 (COVID-19). MedRxiv, 2020.04.12.20062661. https://doi.org/10.1101/2020.04.12.20062661

    1. Shweta, F., Murugadoss, K., Awasthi, S., Venkatakrishnan, A., Puranik, A., Kang, M., Pickering, B. W., O’Horo, J. C., Bauer, P. R., Razonable, R. R., Vergidis, P., Temesgen, Z., Rizza, S., Mahmood, M., Wilson, W. R., Challener, D., Anand, P., Liebers, M., Doctor, Z., … Badley, A. D. (2020). Augmented Curation of Unstructured Clinical Notes from a Massive EHR System Reveals Specific Phenotypic Signature of Impending COVID-19 Diagnosis [Preprint]. Infectious Diseases (except HIV/AIDS). https://doi.org/10.1101/2020.04.19.20067660

  15. Apr 2020
    1. Abdulla, A., Wang, B., Qian, F., Kee, T., Blasiak, A., Ong, Y. H., Hooi, L., Parekh, F., Soriano, R., Olinger, G. G., Keppo, J., Hardesty, C. L., Chow, E. K., Ho, D., & Ding, X. (n.d.). Project IDentif.AI: Harnessing Artificial Intelligence to Rapidly Optimize Combination Therapy Development for Infectious Disease Intervention. Advanced Therapeutics, n/a(n/a), 2000034. https://doi.org/10.1002/adtp.202000034

    1. The world’s largest exhibitions organizer, London-based Informa plc, outlined on Thursday morning a series of emergency actions it’s taking to alleviate the impact of the COVID-19 pandemic on its events business, which drives nearly two-thirds of the company’s overall revenues. Noting that the effects have been “significantly deeper, more volatile and wide-reaching,” than was initially anticipated, the company says it’s temporarily suspending dividends, cutting executive pay and issuing new shares worth about 20% of its total existing capital in an effort to strengthen its balance sheet and reduce its approximately £2.4 billion ($2.9 billion) in debt to £1.4 billion ($1.7 billion). Further, Informa says it’s engaged in “constructive discussions” with its U.S.-based debt holders over a covenant waiver agreement.

      Informa Group, which owns publishers such as Taylor & Francis under its Informa Intelligent Division, is taking measures in its conferences and events business. This provides two-thirds of its total revenue, $2.9 billion. It is issuing shares and, for the North American market, debt agreements. Meanwhile, the publishing arm, which contributes 35% of revenue, remains unchanged, with stable and solid forecasts. Stephen Carter is CEO.

    1. The public thus acquires a new function: that of a critical instance to which power must expose itself.

      The function of the public sphere: a critical apparatus (critique is what produces public space).

      To maintain its legitimacy, power must be exposed to the public sphere and show itself to it with transparency; it must be open to challenge; if it cannot withstand public criticism, it does not deserve to be in place.

      The possibility of challenging public authority is comparable to the publication of security protocols used in the public domain (e.g. SSL/TLS): the security of encrypted content draws its robustness precisely from the fact that the algorithm is public; anyone could challenge it at any moment, which helps ensure that all of its flaws are eliminated (and collective intelligence can be called upon, where appropriate).

    1. Smart guided-tour applications rely on a Visitor Flow Management Process (VFMP) to direct visitors toward the areas where they are least numerous. The idea is then to combine real-time attendance data for each space with visitors' wishes and tastes in order to suggest the ideal personalized route

      An argument in favor of AI: it does manage the flow, but it also adds a second benefit, proposing an ideal route. This additional benefit can be seen as a rhetorical argument of the Logos type.
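
      A minimal sketch of the combination described in the excerpt (visitor preferences minus real-time crowding); room names, scores, and the penalty weight are made up for illustration.

      ```python
      # Hypothetical sketch of a visitor-flow suggestion: prefer rooms the visitor
      # likes, penalise rooms that are currently crowded. All data are illustrative.
      rooms = {
          "impressionism": {"interest": 0.9, "occupancy": 0.8},  # liked but packed
          "antiquities":   {"interest": 0.6, "occupancy": 0.2},  # mildly liked, empty
          "modern_art":    {"interest": 0.3, "occupancy": 0.1},
      }

      def score(room, crowd_penalty=1.0):
          return room["interest"] - crowd_penalty * room["occupancy"]

      suggested_route = sorted(rooms, key=lambda name: score(rooms[name]), reverse=True)
      print(suggested_route)  # ['antiquities', 'modern_art', 'impressionism']
      ```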

    2. Certain smart technologies used in other sectors could be transposed to museums. With big data, it is possible to know attendance by date and time, the types of visitors by day and period, or the average visit duration against various parameters such as the weather.

      An inductive epistemic argument, rhetorical and of the logos type.

      We move on to artificial intelligence, a cutting-edge technology. This lends credibility to the claim that digital technology is beneficial.

  16. Feb 2020
    1. visual are processed 60,000 times faster in the brain than text and visual aids in the classroom improve learning up to 400 percent. Ideas presented graphically are easier to understand and remember than those presented as words, (Kliegel et al., 1987).

      throw out this factoid when doing video?

  17. Dec 2019
    1. Ranking the intelligence of animals seems an increasingly pointless exercise when one considers the really important thing: how well that animal is adapted to its niche
    1. “NextNow Collaboratory is an interesting example of a new kind of collective intelligence: an Internet-enabled, portable social network, easily transferable from one social cause to another.”

      Sense Collective's TotemSDK brings together tools, protocols, platform integrations and best practices for extending collective intelligence beyond our current capabilities. A number of cryptographic primitives have emerged that support the amazing work of projects like the NextNow Collaboratory in exciting ways, helping to upgrade the general-purpose social computing substrate that makes tools like hypothes.is so valuable.

    1. A natural language provides its user with a ready-made structure of concepts that establishes a basic mental structure, and that allows relatively flexible, general-purpose concept structuring. Our concept of language as one of the basic means for augmenting the human intellect embraces all of the concept structuring which the human may make use of.
    2. It has been jokingly suggested several times during the course of this study that what we are seeking is an "intelligence amplifier." (The term is attributed originally to W. Ross Ashby[2,3]. At first this term was rejected on the grounds that in our view one's only hope was to make a better match between existing human intelligence and the problems to be tackled, rather than in making man more intelligent. But deriving the concepts brought out in the preceding section has shown us that indeed this term does seem applicable to our objective. 2c2a Accepting the term "intelligence amplification" does not imply any attempt to increase native human intelligence. The term "intelligence amplification" seems applicable to our goal of augmenting the human intellect in that the entity to be produced will exhibit more of what can be called intelligence than an unaided human could; we will have amplified the intelligence of the human by organizing his intellectual capabilities into higher levels of synergistic structuring. What possesses the amplified intelligence is the resulting H-LAM/T system, in which the LAM/T augmentation means represent the amplifier of the human's intelligence.2c2b In amplifying our intelligence, we are applying the principle of synergistic structuring that was followed by natural evolution in developing the basic human capabilities. What we have done in the development of our augmentation means is to construct a superstructure that is a synthetic extension of the natural structure upon which it is built. In a very real sense, as represented by the steady evolution of our augmentation means, the development of "artificial intelligence" has been going on for centuries.
    1. This is not a new idea. It is based on the vision expounded by Vannevar Bush in his 1945 essay “As We May Think,” which conjured up a “memex” machine that would remember and connect information for us mere mortals. The concept was refined in the early 1960s by the Internet pioneer J. C. R. Licklider, who wrote a paper titled “Man-Computer Symbiosis,” and the computer designer Douglas Engelbart, who wrote “Augmenting Human Intellect.” They often found themselves in opposition to their colleagues, like Marvin Minsky and John McCarthy, who stressed the goal of pursuing artificial intelligence machines that left humans out of the loop.

      Seymour Papert had an approach that provides a nice synthesis between these two camps, by leveraging early childhood development to provide insights on the creation of AI.

    2. Thompson’s point is that “artificial intelligence” — defined as machines that can think on their own just like or better than humans — is not yet (and may never be) as powerful as “intelligence amplification,” the symbiotic smarts that occur when human cognition is augmented by a close interaction with computers.

      Intelligence amplification over artificial intelligence. In reality you can't get to AI until you've mastered IA.

    1. Plants speak in a chemical vocabulary we can’t directly perceive or comprehend. The first important discoveries in plant communication were made in the lab in the nineteen-eighties, by isolating plants and their chemical emissions in Plexiglas chambers, but Rick Karban, the U.C. Davis ecologist, and others have set themselves the messier task of studying how plants exchange chemical signals outdoors, in a natural setting.
    1. Alexander Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed.

      Tags were used for discovery of specific types of content. Who needs that now that our new overlords of artificial intelligence and algorithmic feeds can tell us what we want to see?!

      Of course we still need tags!!! How are you going to know serendipitously that you need more poetry in your life until you run into the tag on a service like IndieWeb.xyz? An algorithmic feed is unlikely to notice--or at least in my decade of living with them I've yet to run into poetry in one.

  18. Nov 2019
    1. A multimedia approach to affective learning and training can result in more life-like trainings which replicate scenarios and thus provide more targeted feedback, interventions, and experience to improve decision making and outcomes. Rating: 7/10

    1. An emotional intelligence course initiated by Google became a tool to improve mindfulness, productivity, and emotional IQ. The course has since expanded into other businesses which report that employees are coping better with stressors and challenges. Rating: 7/10 Key questions...what is the format of the course, tools etc?

  19. Sep 2019
    1. The idea of a “plant intelligence”—an intelligence that goes beyond adaptation and reaction and into the realm of active memory and decision-making—has been in the air since at least the early seventies.

      what is intelligence after all?

    2. “Trees do not have will or intention. They solve problems, but it’s all under hormonal control, and it all evolved through natural selection.”

      is having will or intention akin to having intelligence?

  20. Aug 2019
    1. so there won’t be a blinking bunny, at least not yet, let’s train our bunny to blink on command by mixing stimuli ( the tone and the air puff)

      Is that just how we all learn and evolve? 😲

    1. HTM and SDR's - part of how the brain implements intelligence.

      "In this first introductory episode of HTM School, Matt Taylor, Numenta's Open Source Flag-Bearer, walks you through the high-level theory of Hierarchical Temporal Memory in less than 15 minutes."

    1. A notable by-product of a move of clinical as well as research data to the cloud would be the erosion of market power of EMR providers.

      But we have to be careful not to inadvertently favour the big tech companies in trying to stop favouring the big EMR providers.

    2. cloud computing is provided by a small number of large technology companies who have both significant market power and strong commercial interests outside of healthcare for which healthcare data might potentially be beneficial

      AI is controlled by these external forces. In what direction will this lead it?

    3. it has long been argued that patients themselves should be the owners and guardians of their health data and subsequently consent to their data being used to develop AI solutions.

      Mere consent isn't enough. We consent to give away all sorts of data for phone apps that we don't even really consider. We need much stronger awareness, or better defaults so that people aren't sharing things without proper consideration.

    4. To realize this vision and to realize the potential of AI across health systems, more fundamental issues have to be addressed: who owns health data, who is responsible for it, and who can use it? Cloud computing alone will not answer these questions—public discourse and policy intervention will be needed.

      This is part of the habit and culture of data use. And it's very different in health than in other sectors, given the sensitivity of the data, among other things.

    5. In spite of the widely touted benefits of “data liberation”,15 a sufficiently compelling use case has not been presented to overcome the vested interests maintaining the status quo and justify the significant upfront investment necessary to build data infrastructure.

      Advancing AI requires more than just AI stuff. It requires infrastructure and changes in human habit and culture.

    6. However, clinician satisfaction with EMRs remains low, resulting in variable completeness and quality of data entry, and interoperability between different providers remains elusive.11

      Another issue with complex systems: the data can be voluminous but of poor individual quality, relying on domain knowledge to properly interpret it (e.g., that doctor didn't really prescribe 10x the recommended dose; it was probably an error).

    7. Second, most healthcare organizations lack the data infrastructure required to collect the data needed to optimally train algorithms to (a) “fit” the local population and/or the local practice patterns, a requirement prior to deployment that is rarely highlighted by current AI publications, and (b) interrogate them for bias to guarantee that the algorithms perform consistently across patient cohorts, especially those who may not have been adequately represented in the training cohort.9

      AI depends on:

      • static processes - if the population you are predicting changes relative to the one used to train the model, all bets are off. It remains to be seen how similar they need to be given the brittleness of AI algorithms.
      • homogeneous population - beyond race, what else is important? If we don't have a good theory of health, we don't know. (A minimal per-cohort performance check is sketched below.)
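
      One way to make the bias interrogation mentioned in the excerpt concrete: compute the same performance metric separately for each patient cohort and flag large gaps. A minimal sketch, with made-up records, cohort labels, and threshold:

      ```python
      # Hypothetical sketch: evaluate one metric per cohort and report the gap.
      records = [
          {"cohort": "A", "y_true": 1, "y_pred": 1},
          {"cohort": "A", "y_true": 0, "y_pred": 0},
          {"cohort": "B", "y_true": 1, "y_pred": 0},
          {"cohort": "B", "y_true": 0, "y_pred": 0},
      ]

      def accuracy(rows):
          return sum(r["y_true"] == r["y_pred"] for r in rows) / len(rows)

      by_cohort = {}
      for r in records:
          by_cohort.setdefault(r["cohort"], []).append(r)

      scores = {c: accuracy(rows) for c, rows in by_cohort.items()}
      gap = max(scores.values()) - min(scores.values())
      print(scores, "gap:", gap)  # review the model if the gap exceeds a preset threshold
      ```
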
    8. Simply adding AI applications to a fragmented system will not create sustainable change.
    1. Both artists, through annotation, have produced new forms of public dialogue in response to other people (like Harvey Weinstein), texts (The New York Times), and ideas (sexual assault and racial bias) that are of broad social and political consequence.

      What about examples of future sorts of annotations/redactions like these with emerging technologies? Stories about deepfakes (like Obama calling Trump a "dipshit" or the Youtube Channel Bad Lip Reading redubbing the words of Senator Ted Cruz) are becoming more prevalent and these are versions of this sort of redaction taken to greater lengths. At present, these examples are obviously fake and facetious, but in short order they will be indistinguishable and more commonplace.

  21. Jul 2019
  22. Jun 2019
    1. The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
  23. May 2019
    1. Deep machine learning, which is using algorithms to replicate human thinking, is predicated on specific values from specific kinds of people—namely, the most powerful institutions in society and those who control them.

      This reminds me of this Reddit page

      The page takes pictures and text from other Reddit pages and uses them to create computer-generated posts and comments. It is interesting to see the intelligence and quality of understanding grow as it gathers more and more information.

    1. government investments
    2. initiatives from the U.S., China, and Europe
    3. Recent Government Initiatives
    4. engagement in AI activities by academics, corporations, entrepreneurs, and the general public

      Volume of Activity

    5. Derivative Measures
    6. AI Vibrancy Index
    7. limited gender diversity in the classroom
    8. improvement in natural language
    9. the COCO leaderboard
    10. patents
    11. robot operating system downloads,
    12. the GLUE metric
    13. robot installations
    14. AI conference attendance
    15. the speed at which computers can be trained to detect objects

      Technical Performance

    16. quality of question answering

      Technical Performance

    17. changes in AI performance

      Technical Performance

    18. Technical Performance
    19. number of undergraduates studying AI

      Volume of Activity

    20. growth in venture capital funding of AI startups

      Volume of Activity

    21. percent of female applicants for AI jobs

      Volume of Activity

    22. Volume of Activity
    23. increased participation in organizations like AI4ALL and Women in Machine Learning
    24. producers of AI patents
    25. ML teaching events
    26. University course enrollment
    27. 83 percent of 2017 AI papers
    1. Methodology The classic OSINT methodology you will find everywhere is strait-forward: Define requirements: What are you looking for? Retrieve data Analyze the information gathered Pivoting & Reporting: Either define new requirements by pivoting on data just gathered or end the investigation and write the report.

      Etienne's blog! Amazing resource for OSINT; particularly focused on technical attacks.
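
      The methodology reads naturally as a loop. A minimal sketch, with placeholder collect/analyze callables (not the blog's actual tooling):

      ```python
      # Hypothetical sketch of the OSINT loop: define requirements, retrieve,
      # analyze, then either pivot on new leads or stop and write the report.
      def investigate(initial_requirements, collect, analyze, max_pivots=5):
          findings, queue = [], list(initial_requirements)
          for _ in range(max_pivots):
              if not queue:
                  break
              requirement = queue.pop(0)
              data = collect(requirement)         # retrieve data for this requirement
              result, new_leads = analyze(data)   # analysis may surface new requirements
              findings.append(result)
              queue.extend(new_leads)             # pivot on what was just gathered
          return findings                         # material for the final report
      ```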

    1. There’s a bug in the evolutionary code that makes up our brains.

      Saying it's a "bug" implies that it's bad. But something this significant likely improved our evolutionary fitness in the past. This "bug" is more of a previously useful adaptation. Whether it's still useful or not is another question, but it might be.

  24. Apr 2019
    1. Ashley Norris is the Chief Academic Officer at ProctorU, an organization that provides online exam proctoring for schools. This article has an interesting overview of the negative side of technology advancements and what that has meant for students' ability to cheat. While the article does culminate as an ad, of sorts, for ProctorU, it is an interesting read and sparks thoughts on ProctorU's use of both human monitors for testing and its integration of Artificial Intelligence into the process.

      Rating: 9/10.

  25. Mar 2019
    1. If you do not like the price you’re being offered when you shop, do not take it personally: many of the prices we see online are being set by algorithms that respond to demand and may also try to guess your personal willingness to pay. What’s next? A logical next step is that computers will start conspiring against us. That may sound paranoid, but a new study by four economists at the University of Bologna shows how this can happen.
    1. Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning.

      Again, this doesn't conflict with a machine-learning or deep-learning or neural-net way of seeing IP.

    2. No ‘copy’ of the story is ever made

      Or, the copy initially made is changed over time since human "memory" is interdependent and interactive with other brain changes, whereas each bit in computer memory is independent of all other bits.

      However, machine learning probably results in interactions between bits as the learning algorithm is exposed to more training data. The values in a deep neural network interact in ways that are not so obvious. So this machine-human analogy might be getting new life with machine learning.

    3. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight

      I don't see how this is true. The IP perspective depends on algorithms. There are many different algorithms to perform various tasks. Some perform reverse-kinematic calculations, but others conduct simpler, repeated steps. In computer science, this might be dynamic programming, recursive algorithms, or optimization. It seems that the IP metaphor still fits: it's just that those using the metaphor may not have updated their model of IP to be more modern.
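
      One concrete version of those "simpler, repeated steps" is the gaze-heuristic family of strategies: instead of estimating initial conditions and solving the trajectory, the fielder repeatedly adjusts running speed based on the ball's apparent angle of elevation. A heavily simplified sketch (the physics and the gain are illustrative, not a faithful model):

      ```python
      import math

      # Simplified feedback rule: nudge running speed so the ball's elevation angle
      # keeps rising at a steady rate, rather than computing the trajectory up front.
      def elevation_angle(ball_xy, fielder_x):
          bx, by = ball_xy
          return math.atan2(by, max(bx - fielder_x, 1e-6))

      def step(fielder_x, speed, angle_prev, rate_prev, ball_xy, dt=0.1, gain=0.5):
          angle = elevation_angle(ball_xy, fielder_x)
          rate = (angle - angle_prev) / dt
          # If the angle is rising ever faster, the ball is going long: ease off.
          # If it is rising more slowly, the ball is dropping short: speed up.
          speed += gain * (rate_prev - rate)
          return fielder_x + speed * dt, speed, angle, rate
      ```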

    1. It is no wonder that AI is gaining popularity. Many facts and advantages are driving such profitable growth, and the essential points are presented well in this article.

    1. You were beginning to gather that there were other symbols mixed with the words that might be part of a sentence, and that the different parts of what made a full-thought statement (your feeling about what a sentence is) were not just laid out end to end as you expected.

      This suggests that Joe is doing something almost completely unrecognizable, with language at least. I guess my assumption is that I would know what Joe was doing; he'd just be doing it so quickly I wouldn't be able to follow. And he'd complete the task, a task I recognize, far more quickly than I possibly could using comparable analog technologies. Perhaps this is me saying I buy Engelbart's augmentation idea on the level of efficiency, but remain skeptical of, or at least have yet to realize, its transformative effect on intellect itself.

    2. we provide him as much help as possible in making a plan of action. Then we give him as much help as we can in carrying it out. But we also have to allow him to change his mind at almost any point, and to want to modify his plans.

      I'm thinking about the role of AI tutors/advisors here. How often do they operate in the kind of flexible way described here? I wonder if they can without actual human intervention.

  26. Feb 2019
    1. In amplifying our intelligence, we are applying the principle of synergistic structuring that was followed by natural evolution in developing the basic human capabilities. What we have done in the development of our augmentation means is to construct a superstructure that is a synthetic extension of the natural structure upon which it is built. In a very real sense, as represented by the steady evolution of our augmentation means, the development of "artificial intelligence" has been going on for centuries.

      Engelbart explicitly noted that what he was trying to do was not just hack culture, which is what significant innovations accomplish, but to hack the process by which biological and cultural co-evolution has bootstrapped itself to this point. Culture used the capabilities provided by biological evolution -- language, thumbs, etc. -- to improve human ways of living much faster than biological evolution can do, by not just inventing, but passing along to each other and future generations the knowledge of what was invented and how to invent. Engelbart proposes an audio-visual-tactile interface to computing as a tool for consciously accelerating the scope and power of individual and collective intelligence.

    2. Our culture has evolved means for us to organize the little things we can do with our basic capabilities so that we can derive comprehension from truly complex situations, and accomplish the processes of deriving and implementing problem solutions. The ways in which human capabilities are thus extended are here called augmentation means, and we define four basic classes of them: 2a4 Artifacts—physical objects designed to provide for human comfort, for the manipulation of things or materials, and for the manipulation of symbols.2a4a Language—the way in which the individual parcels out the picture of his world into the concepts that his mind uses to model that world, and the symbols that he attaches to those concepts and uses in consciously manipulating the concepts ("thinking"). 2a4b Methodology—the methods, procedures, strategies, etc., with which an individual organizes his goal-centered (problem-solving) activity. 2a4c Training—the conditioning needed by the human being to bring his skills in using Means 1, 2, and 3 to the point where they are operationally effective. 2a4d The system we want to improve can thus be visualized as a trained human being together with his artifacts, language, and methodology. The explicit new system we contemplate will involve as artifacts computers, and computer-controlled information-storage, information-handling, and information-display devices. The aspects of the conceptual framework that are discussed here are primarily those relating to the human being's ability to make significant use of such equipment in an integrated system.

      To me, this is the most prescient of Engelbart's future visions, and the seed for future study of culture-technology co-evolution. I talked with Engelbart about this passage over the years and we agreed that although the power of the artifacts, from RAM to CPU speed to network bandwidth, had improved by the billionfold since 1962, the "softer" parts of the formula -- the language, methodology, and training -- have not advanced so much. Certainly language, training methods and pedagogy, and collaborative strategies have evolved with the growth and spread of digital media, but are still lagging. H/LAMT interests me even more today than it did thirty years ago because Engelbart unknowingly forecast the fundamental elements of what has come to be called cultural-biological co-evolution. I gave a TED talk in 2005, calling for an interdisciplinary study of human cooperation -- and obstacles to cooperation. It seems that in recent years an interdisciplinary understanding has begun to emerge. Joseph Henrich at Harvard, for one, in his recent book, The Secret of Our Success, noted:

      Drawing insights from lost European Explorers, clever chimpanzees, hunter-gatherers, cultural neuroscience, ancient bones and the human genome, Henrich shows that it’s not our general intelligence, innate brain power, or specialized mental abilities that explain our success. Instead, it’s our collective brains, which arise from a combination of our ability to learn selectively from each other and our sociality. Our collective brains, which often operate outside of any individual’s conscious awareness, gradually produce increasingly complex, nuanced and subtle technological, linguistic and social products over generations.

      Tracking this back into the mist of our evolutionary past, and to the remote corners of the globe, Henrich shows how this non-genetic system of cultural inheritance has long driven human genetic evolution. By producing fire, cooking, water containers, tracking know-how, plant knowledge, words, hunting strategies and projectiles, culture-driven genetic evolution expanded our brains, shaped our anatomy and physiology, and influenced our psychology, making us into the world’s only living cultural species. Only by understanding cultural evolution, can we understand human genetic evolution.

      Henrich, Boyd, and Richerson wrote, about the social fundamentals that distinguish human culture's methods of evolving collective intelligence, in The Origin and Evolution of Culture:

      Surely, without punishment, language, technology, individual intelligence and inventiveness, ready establishment of reciprocal arrangements, prestige systems and solutions to games of coordination, our societies would take on a distinctly different cast. Thus, a major constraint on explanations of human sociality is its systemic structure

    1. Nearly half of FBI rap sheets failed to include information on the outcome of a case after an arrest—for example, whether a charge was dismissed or otherwise disposed of without a conviction, or if a record was expunged

      This explains my personal experience here: https://hyp.is/EIfMfivUEem7SFcAiWxUpA/epic.org/privacy/global_entry/default.html (Why someone who had Global Entry was flagged for a police incident before he applied for Global Entry).

    2. Applicants also agree to have their fingerprints entered into DHS’ Automatic Biometric Identification System (IDENT) “for recurrent immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

      The mention of intelligence checks is very concerning here, as it suggests pretty much what has already been leaked: that the US is running complex autonomous screening of all of this data, all the time. This also opens up the possibility of discriminatory algorithms, since most of these are probably rooted in machine learning techniques, and the criminal justice system in the US today tends to be fairly biased against certain groups of people to begin with.

    3. It cited research, including some authored by the FBI, indicating that “some of the biometrics at the core of NGI, like facial recognition, may misidentify African Americans, young people, and women at higher rates than whites, older people, and men, respectively.

      This re-affirms the previous annotation: the set of training data for the intelligence checks the US runs on Global Entry data is biased with respect to certain groups of people.

  27. Jan 2019
    1. machine intelligence

      Interestingly enough, we saw it coming. All the advances that led to this much efficiency in technology were not to be taken lightly. A few decades ago (about 35 years, since the emergence of the internet and online networks in 1983), people probably saw the internet as a gift from the heavens, one with few if any downsides. But now that it has advanced to such an extreme, with advanced machine engineering, we have learned otherwise. The hacking of sites and networks, viruses and malware, and user data surveillance and monitoring are only a few of the downsides to such a heavenly creation. And now we face the truth: machine intelligence is not to be underestimated, or the impact on our lives could be negative in years to come, because it will only get more intense as technology develops further.

    1. AI Robots will be replacing the White Collar Jobs by 6% until 2021

      AI software and chatbots will be included in current technologies and automated with robotic systems. They will be given rights to access calendars, email accounts, browsing history, playlists, past purchases, and media viewing history. 6% is a huge number worldwide, as many people will be struggling to find jobs. But there are benefits too, as work will get done more easily and quickly.

    1. With approximately half of variance in cognitive task performance having non-g sources of variance, we believe that other traits may be important in explaining cognitive performance of both non-Western and Western groups.

      Another important implication of the study.

    2. Although we believe that this study establishes the presence of g in data from these non-Western cultures, this study says nothing about the relative level of general cognitive ability in various societies, nor can it be used to make cross-cultural comparisons. For this purpose, one must establish measurement invariance of a test across different cultural groups (e.g., Holding et al., 2018) to ensure that test items and tasks function in a similar way for each group.

      This is absolutely essential to understanding the implications of the article.

    3. Two peer reviewers raised the possibility that developmental differences across age groups could be a confounding variable because a g factor may be weaker in children than adults.

      Colom also suggested this (see link above). The fact that three people independently had this concern that age could be a moderator variable is telling. I'm glad the peer reviewers had us do this post hoc analysis.

    4. some of these data sets were collected by individuals who are skeptical of the existence or primacy of g in general or in non-Western cultures (e.g., Hashmi et al., 2010; Hashmi, Tirmizi, Shah, & Khan, 2011; O’Donnell et al., 2012; Pitchford & Outhwaite, 2016; Stemler et al., 2009; Sternberg et al., 2001, 2002). One would think that these investigators would be most likely to include variables in their data sets that would form an additional factor. Yet, with only three ambiguous exceptions (Grigorenko et al., 2006; Gurven et al., 2017), these researchers’ data still produced g.

      This is particularly strong evidence for me. If g doesn't exist, these researchers would be the most likely ones to gather data to show that.

    5. the strongest first factor accounted for 86.3% of observed variable variance

      I suspect that this factor was so strong because it consisted of only four observed variables, and three of them were written measures of verbal content. All of the verbal variables correlated r = .72 to .89. Even the "non-verbal" variable (numerical ability) correlates r = .72 to .81 with the other three variables (Rehna & Hanif, 2017, p. 25). Given these strong correlations, a very strong first factor is almost inevitable.
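
      The arithmetic behind "almost inevitable": with four variables all intercorrelated around r = .80, the first component of an idealized correlation matrix already captures about 85% of the variance, close to the 86.3% reported. A quick check (idealized matrix, not the actual Rehna & Hanif data):

      ```python
      import numpy as np

      # Four observed variables, every pairwise correlation set to .80
      # (roughly the r = .72-.89 range noted above).
      r = 0.80
      R = np.full((4, 4), r)
      np.fill_diagonal(R, 1.0)

      first = np.linalg.eigvalsh(R).max()   # largest eigenvalue of the correlation matrix
      print(first, first / 4)               # 3.4 and 0.85 -> first component ~85% of variance
      ```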

    6. The weakest first factor accounted for 18.3% of variance

      This factor may be weak because the sample consists of Sudanese gifted children, which may have restricted the range of correlations in the dataset.

    7. The mean sample size of the remaining data sets was 539.6 (SD = 1,574.5). The large standard deviation in relationship to the mean is indicative of the noticeably positively skewed distribution of sample sizes, a finding supported by the much smaller median of 170 and skewness value of 6.297. There were 16,559 females (33.1%), 25,431 males (48.6%), and 10,350 individuals whose gender was unreported (19.8%). The majority of samples—62 of 97 samples (63.9%)—consisted entirely or predominantly of individuals below 18. Most of the remaining samples contained entirely or predominantly adults (32 data sets, 33.0%), and the remaining 3 datasets (3.1%) had an unknown age range or an unknown mix of adults and children). The samples span nearly the entire range of life span development, from age 2 to elderly individuals.

      My colleague, Roberto Colom, stated in his blog (link below) that he would have discarded samples with fewer than 100 individuals. This is a legitimate analysis decision. See his other commentary (in Spanish) at https://robertocolom.wordpress.com/2018/06/01/la-universalidad-del-factor-general-de-inteligencia-g/

    8. Alternatively, one could postulate that a general cognitive ability is a Western trait but not a universal trait among humans, but this would require an evolutionary model where this general ability evolved several times independently throughout the mammalian clade, including separately in the ancestors of Europeans after they migrated out of Africa and separated from other human groups. Such a model requires (a) a great deal of convergent evolution to occur across species occupying widely divergent environmental niches and (b) an incredibly rapid development of a general cognitive ability while the ancestors of Europeans were under extremely strong selection pressures that other humans did not experience (but other mammal species or their ancestors would have experienced at other times). We find the more parsimonious model of an evolutionary origin of the general cognitive ability in the early stages of mammalian development to be the more plausible one, and thus we believe that it is reasonable to expect a general cognitive ability to be a universal human trait.

      It was this reasoning that led to the decision to conduct this study. There is mounting evidence that g exists in other mammalian species, and it definitely exists in Western cultures. It seemed really unlikely that it would not exist in non-Western groups. But I couldn't find any data about the issue. So, time to do a study!

    9. Researchers studying the cognitive abilities of animals have identified a general factor in cognitive data from many mammal species

      Also, cats:

      Warren, J. M. (1961). Individual differences in discrimination learning by cats. The Journal of Genetic Psychology, 98, 89-93. doi:10.1080/00221325.1961.10534356

    10. Investigating cultural beliefs about intelligence may be mildly interesting from an anthropological perspective, but it sheds little light on the nature of intelligence. One undiscussed methodological problem in many studies of cultural perspectives on intelligence is the reliance on surveys of laymen to determine what people in a given culture believe about intelligence. This methodology says little about the actual nature of intelligence.

      After the manuscript was accepted for publication and the proofs turned in, I re-discovered the following from Gottfredson (2003, p. 362): ". . . lay beliefs are interesting but their value for scientific theories of intelligence is limited to hypothesis generation. Even if the claim were true, then, it would provide no evidence for the truth of any intelligence theory . . ."

      Gottfredson, L. S. (2003). Dissecting practical intelligence theory: Its claims and evidence. Intelligence, 31, 343-397. doi:10.1016/S0160-2896(02)00085-5

    11. Panga Munthu test of intelligence

      To me, this is the way to create tests of intelligence for non-Western cultures: find skills and manifestations of intelligence that are culturally appropriate for a group of examinees and use those skills to tap g. Cross-cultural testing would require identifying skills that are valued or developed in both cultures.

    12. Berry (1986)

      John W. Berry is a cross-cultural psychologist whose work stretches back over 50 years. He takes the position (e.g., Berry, 1986) that definitions of intelligence are culturally-specific and are bound up with the skills cultures encourage and that the environment requires people to develop. Therefore, he does not see Western definitions as applying to most groups.

      After this study, my position is more nuanced. I agree with Berry that the manifestations of intelligence can vary from culture to culture, but that underneath these surface features is g, in all humans.

    1. Curiosity Is as Important as Intelligence

      This one is a pretty bold statement to make, in general.

      Mike Johansson, at Rochester Institute of Technology, makes the case that curiosity is the key to enabling both Creative and Critical Thinking for better problem solving, in general.

      What are some of your ideas?

    1. We believe that members of the public likely learn some inaccurate information about intelligence in their psychology courses. The good news about this implication is that reducing the public’s mistaken beliefs about intelligence will not take a massive public education campaign or public relations blitz. Instead, improving the public’s understanding about intelligence starts in psychology’s own backyard with improving the content of undergraduate courses and textbooks.

      To me, this is the "take home" message of the article. I hope psychology educators do more to improve the accuracy of their lessons about intelligence. I also hope more programs add a course on the topic to their curriculum.

    2. This means that it is actually easier to measure intelligence than many other psychological constructs. Indeed, some individuals trying to measure other constructs have inadvertently created intelligence tests

      When I learned this, it blew my mind.

    3. many psychologists simply accept an operational definition of intelligence by spelling out the procedures they use to measure it. . . . Thus, by selecting items for an intelligence test, a psychologist is saying in a direct way, “This is what I mean by intelligence.” A test that measures memory, reasoning, and verbal fluency offers a very different definition of intelligence than one that measures strength of grip, shoe size, hunting skills, or the person’s best Candy Crush mobile game score. (p. 290)

      Ironically, there is research showing that video game performance is positively correlated with intelligence test scores (e.g., Angeles Quiroga et al., 2015; Foroughi, Serraino, Parasuraman, & Boehm-Davis, 2016).

      Not every inaccurate statement in the textbooks was as silly as this one. Readers would benefit from browsing Supplemental File 2, which

    4. Minnesota Transracial Adoption Study

      This is a study begun in the 1970s of African American, interracial, and other minority group children who had been adopted by White families in Minnesota. The 1976 results indicated large IQ boosts (about 12 points) for adopted African American children at age 6, compared to the average IQ for African Americans in general. However, the 1992 report shows that the advantage had faded to about 6 points when the children were aged 17 years. Generally, intelligence experts see this landmark study as supporting both "nature" and "nurture."

    5. the Stanford-Binet intelligence test

      Although the Stanford-Binet is historically important, the Wechsler family of intelligence tests have been more popular since the 1970s.

    6. Some readers will also be surprised to find that The Bell Curve is not as controversial as its reputation would lead one to believe (and most of the book is not about race at all).

      I wrote this sentence. Two coauthors, three peer reviewers, and an editor all read it multiple times. No one ever asked for it to be changed.

    7. Gardner’s multiple intelligences

      I have a Twitter moment that analyzes Gardner's book "Frames of Mind" and shows why this theory is poorly supported by empirical data. https://twitter.com/i/moments/1064036271847161857

    8. Most frequently this appeared in the form of a tacit acknowledgment that IQ test scores correlate with academic success, followed by a quick denial that the scores are important for anything else in life
    9. this study highlights the mismatch between scholarly consensus on intelligence and the beliefs of the general public

      Christian Jarrett of the The British Psychological Society found this as the main message of the article. Read his blog post at https://digest.bps.org.uk/2018/03/08/best-selling-introductory-psychology-books-give-a-misleading-view-of-intelligence/

    10. Judged solely by the number of factually inaccurate statements, the textbooks we examined were mostly accurate.

      A blog post by James Thompson (psychology professor emeritus at University College London) has a much more acerbic response to the study than this. See his blog post for a contrasting viewpoint: http://www.unz.com/jthompson/fear-and-loathing-in-psychology/

    11. We found that 79.3% of textbooks contained inaccurate statements and 79.3% had logical fallacies in their sections about intelligence.
    12. Gottfredson’s (1997a) mainstream statement on intelligence

      This article is a classic, and it is required reading in my undergraduate human intelligence course. If you only have time to read 1 article about intelligence, this should be it.

    1. CTP synthesizes critical reflection with technology production as a way of highlighting and altering unconsciously-held assumptions that are hindering progress in a technical field.

      Definition of critical technical practice.

      This approach is grounded in AI rather than HCI

      (verbatim from the paper) "CTP consists of the following moves:

      • identifying the core metaphors of the field

      • noticing what, when working with those metaphors, remains marginalized

      • inverting the dominant metaphors to bring that margin to the center

      • embodying the alternative as a new technology

  28. Nov 2018
    1. most importantly, however, when the group has real synergy, it will by far exceed the best individual performance. Synergy is best thought of as members of the same team feeding off one another in positive ways; as result the "whole" becomes better than "the sum of the parts". Collaboration can actually raise the "group IQ" – i.e. the sum total of the best talents of each member on the team.

      Synergy.