654 Matching Annotations
  1. May 2020
    1. Mei, X., Lee, H.-C., Diao, K., Huang, M., Lin, B., Liu, C., Xie, Z., Ma, Y., Robson, P. M., Chung, M., Bernheim, A., Mani, V., Calcagno, C., Li, K., Li, S., Shan, H., Lv, J., Zhao, T., Xia, J., … Yang, Y. (2020). Artificial intelligence for rapid identification of the coronavirus disease 2019 (COVID-19). MedRxiv, 2020.04.12.20062661. https://doi.org/10.1101/2020.04.12.20062661

    1. Shweta, F., Murugadoss, K., Awasthi, S., Venkatakrishnan, A., Puranik, A., Kang, M., Pickering, B. W., O’Horo, J. C., Bauer, P. R., Razonable, R. R., Vergidis, P., Temesgen, Z., Rizza, S., Mahmood, M., Wilson, W. R., Challener, D., Anand, P., Liebers, M., Doctor, Z., … Badley, A. D. (2020). Augmented Curation of Unstructured Clinical Notes from a Massive EHR System Reveals Specific Phenotypic Signature of Impending COVID-19 Diagnosis [Preprint]. Infectious Diseases (except HIV/AIDS). https://doi.org/10.1101/2020.04.19.20067660

  2. Apr 2020
    1. Abdulla, A., Wang, B., Qian, F., Kee, T., Blasiak, A., Ong, Y. H., Hooi, L., Parekh, F., Soriano, R., Olinger, G. G., Keppo, J., Hardesty, C. L., Chow, E. K., Ho, D., & Ding, X. (n.d.). Project IDentif.AI: Harnessing Artificial Intelligence to Rapidly Optimize Combination Therapy Development for Infectious Disease Intervention. Advanced Therapeutics, n/a(n/a), 2000034. https://doi.org/10.1002/adtp.202000034

    1. The world’s largest exhibitions organizer, London-based Informa plc, outlined on Thursday morning a series of emergency actions it’s taking to alleviate the impact of the COVID-19 pandemic on its events business, which drives nearly two-thirds of the company’s overall revenues. Noting that the effects have been “significantly deeper, more volatile and wide-reaching,” than was initially anticipated, the company says it’s temporarily suspending dividends, cutting executive pay and issuing new shares worth about 20% of its total existing capital in an effort to strengthen its balance sheet and reduce its approximately £2.4 billion ($2.9 billion) in debt to £1.4 billion ($1.7 billion). Further, Informa says it’s engaged in “constructive discussions” with its U.S.-based debt holders over a covenant waiver agreement.

      Informa Group, which owns publishers such as Taylor & Francis, is taking measures through its Informa Intelligent Division in its conferences-and-events sector, which provides two-thirds of its total revenue ($2.9 billion). It is issuing shares and, for the North American market, negotiating debt agreements. Meanwhile, the publishing side, which contributes about 35% of revenue, remains unchanged, with stable and solid forecasts. Stephen Carter is CEO.

    1. The public thus acquires a new function: that of a critical instance to which power must expose itself.

      function of public space: a critical apparatus (critique is productive of public space).

      to maintain its legitimacy, power must be exposed to the public sphere and show itself to it with transparency; it must be open to challenge; if it does not withstand public criticism, it does not deserve to be in place.

      the possibility of challenging the public instance is comparable to the publication of security protocols used in the public domain (e.g., SSL/TLS): the security of encrypted content derives its robustness precisely from the fact that the algorithm is public; anyone could challenge it at any moment, so we can be sure all of its flaws get eliminated (and collective intelligence can be called upon, if need be).

    1. Intelligent guided-tour applications rely on a Visitor Flow Management Process (VFMP) to direct visitors toward the areas where they are least numerous. The idea is then to combine real-time attendance data for each space with visitors' wishes and tastes to suggest the ideal personalized route.

      An argument in favor of AI: it does manage the flow, but adds a second benefit, proposing an ideal route. This additional benefit can be considered a rhetorical argument of the Logos type.

    2. Certain intelligent technologies used in other sectors could be transposed to museums. With big data, it is possible to know attendance as a function of dates and times, the types of visitors by day and period, or the average visit duration against different parameters such as the weather.

      An inductive epistemic argument, rhetorical of the logos type.

      We move on to artificial intelligence, a cutting-edge technology. This lends credibility to the claim about the benefits of digital technology.

  3. Feb 2020
    1. visuals are processed 60,000 times faster in the brain than text, and visual aids in the classroom improve learning up to 400 percent. Ideas presented graphically are easier to understand and remember than those presented as words (Kliegel et al., 1987).

      throw out this factoid when doing video?

  4. Dec 2019
    1. Ranking the intelligence of animals seems an increasingly pointless exercise when one considers the really important thing: how well that animal is adapted to its niche
    1. “NextNow Collaboratory is an interesting example of a new kind of collective intelligence: an Internet-enabled, portable social network, easily transferable from one social cause to another.”

      Sense Collective's TotemSDK brings together tools, protocols, platform integrations and best practices for extending collective intelligence beyond our current capabilities. A number of cryptographic primitives have emerged which support the amazing work of projects like the NextNow Collaboratory in exciting ways that help to upgrade the general-purpose social computing substrate that makes tools like hypothes.is so valuable.

    1. A natural language provides its user with a ready-made structure of concepts that establishes a basic mental structure, and that allows relatively flexible, general-purpose concept structuring. Our concept of language as one of the basic means for augmenting the human intellect embraces all of the concept structuring which the human may make use of.
    2. It has been jokingly suggested several times during the course of this study that what we are seeking is an "intelligence amplifier." (The term is attributed originally to W. Ross Ashby[2,3].) At first this term was rejected on the grounds that in our view one's only hope was to make a better match between existing human intelligence and the problems to be tackled, rather than in making man more intelligent. But deriving the concepts brought out in the preceding section has shown us that indeed this term does seem applicable to our objective. 2c2a

      Accepting the term "intelligence amplification" does not imply any attempt to increase native human intelligence. The term "intelligence amplification" seems applicable to our goal of augmenting the human intellect in that the entity to be produced will exhibit more of what can be called intelligence than an unaided human could; we will have amplified the intelligence of the human by organizing his intellectual capabilities into higher levels of synergistic structuring. What possesses the amplified intelligence is the resulting H-LAM/T system, in which the LAM/T augmentation means represent the amplifier of the human's intelligence. 2c2b

      In amplifying our intelligence, we are applying the principle of synergistic structuring that was followed by natural evolution in developing the basic human capabilities. What we have done in the development of our augmentation means is to construct a superstructure that is a synthetic extension of the natural structure upon which it is built. In a very real sense, as represented by the steady evolution of our augmentation means, the development of "artificial intelligence" has been going on for centuries.
    1. This is not a new idea. It is based on the vision expounded by Vannevar Bush in his 1945 essay “As We May Think,” which conjured up a “memex” machine that would remember and connect information for us mere mortals. The concept was refined in the early 1960s by the Internet pioneer J. C. R. Licklider, who wrote a paper titled “Man-Computer Symbiosis,” and the computer designer Douglas Engelbart, who wrote “Augmenting Human Intellect.” They often found themselves in opposition to their colleagues, like Marvin Minsky and John McCarthy, who stressed the goal of pursuing artificial intelligence machines that left humans out of the loop.

      Seymour Papert had an approach that provides a nice synthesis between these two camps, by leveraging early childhood development to provide insights on the creation of AI.

    2. Thompson’s point is that “artificial intelligence” — defined as machines that can think on their own just like or better than humans — is not yet (and may never be) as powerful as “intelligence amplification,” the symbiotic smarts that occur when human cognition is augmented by a close interaction with computers.

      Intelligence amplification over artificial intelligence. In reality you can't get to AI until you've mastered IA.

    1. Plants speak in a chemical vocabulary we can’t directly perceive or comprehend. The first important discoveries in plant communication were made in the lab in the nineteen-eighties, by isolating plants and their chemical emissions in Plexiglas chambers, but Rick Karban, the U.C. Davis ecologist, and others have set themselves the messier task of studying how plants exchange chemical signals outdoors, in a natural setting.
    1. Alexander Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed.

      Tags were used for discovery of specific types of content. Who needs that now that our new overlords of artificial intelligence and algorithmic feeds can tell us what we want to see?!

      Of course we still need tags!!! How are you going to know serendipitously that you need more poetry in your life until you run into the tag on a service like IndieWeb.xyz? An algorithmic feed is unlikely to notice--or at least in my decade of living with them I've yet to run into poetry in one.

  5. Nov 2019
    1. A multimedia approach to affective learning and training can result in more life-like trainings which replicate scenarios and thus provide more targeted feedback, interventions, and experience to improve decision making and outcomes. Rating: 7/10

    1. An emotional intelligence course initiated by Google became a tool to improve mindfulness, productivity, and emotional IQ. The course has since expanded into other businesses which report that employees are coping better with stressors and challenges. Rating: 7/10 Key questions...what is the format of the course, tools etc?

  6. Sep 2019
    1. The idea of a “plant intelligence”—an intelligence that goes beyond adaptation and reaction and into the realm of active memory and decision-making—has been in the air since at least the early seventies.

      what is intelligence after all?

    2. “Trees do not have will or intention. They solve problems, but it’s all under hormonal control, and it all evolved through natural selection.”

      is having will or intention akin to having intelligence?

  7. Aug 2019
    1. so there won’t be a blinking bunny, at least not yet, let’s train our bunny to blink on command by mixing stimuli (the tone and the air puff)

      Is it just that how we all learn and evolve? 😲

    1. HTM and SDRs - part of how the brain implements intelligence.

      "In this first introductory episode of HTM School, Matt Taylor, Numenta's Open Source Flag-Bearer, walks you through the high-level theory of Hierarchical Temporal Memory in less than 15 minutes."
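
      Numenta's actual HTM implementation is far more involved, but the core SDR idea -- wide, mostly-zero bit vectors compared by counting shared active bits -- can be sketched in a few lines (function names and bit choices below are invented for illustration, not Numenta's API):

```python
# Sketch of Sparse Distributed Representations (SDRs): wide, mostly-zero
# bit vectors, compared by counting shared active bits.
def make_sdr(active_bits, size=2048):
    """Represent an SDR by the set of indices of its active (1) bits."""
    assert all(0 <= b < size for b in active_bits)
    return frozenset(active_bits)

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return len(a & b)

cat = make_sdr({3, 87, 512, 1033, 1900})
kitten = make_sdr({3, 87, 512, 1033, 1777})  # shares most bits with cat
car = make_sdr({10, 400, 800, 1200, 1600})

print(overlap(cat, kitten))  # 4 -- semantically close
print(overlap(cat, car))     # 0 -- unrelated
```

      Because the vectors are large and sparse, even a few shared active bits are statistically unlikely by chance, which is what makes overlap a meaningful similarity signal.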

    1. A notable by-product of a move of clinical as well as research data to the cloud would be the erosion of market power of EMR providers.

      But we have to be careful not to inadvertently favour the big tech companies in trying to stop favouring the big EMR providers.

    2. cloud computing is provided by a small number of large technology companies who have both significant market power and strong commercial interests outside of healthcare for which healthcare data might potentially be beneficial

      AI is controlled by these external forces. In what direction will this lead it?

    3. it has long been argued that patients themselves should be the owners and guardians of their health data and subsequently consent to their data being used to develop AI solutions.

      Mere consent isn't enough. We consent to give away all sorts of data for phone apps that we don't even really consider. We need much stronger awareness, or better defaults so that people aren't sharing things without proper consideration.

    4. To realize this vision and to realize the potential of AI across health systems, more fundamental issues have to be addressed: who owns health data, who is responsible for it, and who can use it? Cloud computing alone will not answer these questions—public discourse and policy intervention will be needed.

      This is part of the habit and culture of data use. And it's very different in health than in other sectors, given the sensitivity of the data, among other things.

    5. In spite of the widely touted benefits of “data liberation”,15 a sufficiently compelling use case has not been presented to overcome the vested interests maintaining the status quo and justify the significant upfront investment necessary to build data infrastructure.

      Advancing AI requires more than just AI stuff. It requires infrastructure and changes in human habit and culture.

    6. However, clinician satisfaction with EMRs remains low, resulting in variable completeness and quality of data entry, and interoperability between different providers remains elusive.11

      Another issue with complex systems: the data can be voluminous but of poor individual quality, relying on domain knowledge to interpret properly (e.g., that doctor didn't really prescribe 10x the recommended dose; it was probably an error).

    7. Second, most healthcare organizations lack the data infrastructure required to collect the data needed to optimally train algorithms to (a) “fit” the local population and/or the local practice patterns, a requirement prior to deployment that is rarely highlighted by current AI publications, and (b) interrogate them for bias to guarantee that the algorithms perform consistently across patient cohorts, especially those who may not have been adequately represented in the training cohort.9

      AI depends on:

      • static processes - if the population you are predicting changes relative to the one used to train the model, all bets are off. It remains to be seen how similar they need to be given the brittleness of AI algorithms.
      • homogeneous population - beyond race, what else is important? If we don't have a good theory of health, we don't know.
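
      Point (b) above -- interrogating an algorithm for consistent performance across cohorts -- amounts to reporting a metric per subgroup instead of one aggregate number. A minimal sketch, with invented data and cohort labels:

```python
# Hypothetical sketch: check a model for bias by reporting accuracy
# separately for each patient cohort rather than one aggregate number.
# The records and cohort names are invented for illustration.
from collections import defaultdict

def accuracy_by_cohort(records):
    """records: iterable of (cohort, y_true, y_pred) triples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for cohort, y_true, y_pred in records:
        totals[cohort] += 1
        hits[cohort] += int(y_true == y_pred)
    return {c: hits[c] / totals[c] for c in totals}

records = [
    ("cohort_A", 1, 1), ("cohort_A", 0, 0), ("cohort_A", 1, 1), ("cohort_A", 0, 1),
    ("cohort_B", 1, 0), ("cohort_B", 0, 0), ("cohort_B", 1, 0), ("cohort_B", 0, 0),
]
print(accuracy_by_cohort(records))  # A: 0.75, B: 0.5 -- a gap worth investigating
```

      A cohort that was underrepresented in the training data would show up here as a depressed subgroup score even when the aggregate metric looks fine.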
    8. Simply adding AI applications to a fragmented system will not create sustainable change.
    1. Both artists, through annotation, have produced new forms of public dialogue in response to other people (like Harvey Weinstein), texts (The New York Times), and ideas (sexual assault and racial bias) that are of broad social and political consequence.

      What about examples of future sorts of annotations/redactions like these with emerging technologies? Stories about deepfakes (like Obama calling Trump a "dipshit" or the Youtube Channel Bad Lip Reading redubbing the words of Senator Ted Cruz) are becoming more prevalent and these are versions of this sort of redaction taken to greater lengths. At present, these examples are obviously fake and facetious, but in short order they will be indistinguishable and more commonplace.

  8. Jul 2019
  9. Jun 2019
    1. The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
  10. May 2019
    1. Deep machine learning, which is using algorithms to replicate human thinking, is predicated on specific values from specific kinds of people—namely, the most powerful institutions in society and those who control them.

      This reminds me of this Reddit page

      The page takes pictures and texts from other Reddit pages and uses it to create computer generated posts and comments. It is interesting to see the intelligence and quality of understanding grow as it gathers more and more information.

    1. government investments
    2. initiatives from the U.S., China, and Europe
    3. Recent Government Initiatives
    4. engagement in AI activities by academics, corporations, entrepreneurs, and the general public

      Volume of Activity

    5. Derivative Measures
    6. AI Vibrancy Index
    7. limited gender diversity in the classroom
    8. improvement in natural language
    9. the COCO leaderboard
    10. patents
    11. robot operating system downloads
    12. the GLUE metric
    13. robot installations
    14. AI conference attendance
    15. the speed at which computers can be trained to detect objects

      Technical Performance

    16. quality of question answering

      Technical Performance

    17. changes in AI performance

      Technical Performance

    18. Technical Performance
    19. number of undergraduates studying AI

      Volume of Activity

    20. growth in venture capital funding of AI startups

      Volume of Activity

    21. percent of female applicants for AI jobs

      Volume of Activity

    22. Volume of Activity
    23. increased participation in organizations like AI4ALL and Women in Machine Learning
    24. producers of AI patents
    25. ML teaching events
    26. University course enrollment
    27. 83 percent of 2017 AI papers
    1. Methodology: The classic OSINT methodology you will find everywhere is straightforward:

      • Define requirements: What are you looking for?
      • Retrieve data
      • Analyze the information gathered
      • Pivoting & Reporting: Either define new requirements by pivoting on data just gathered, or end the investigation and write the report.

      Etienne's blog! Amazing resource for OSINT; particularly focused on technical attacks.
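
      The four-step loop quoted above can be sketched as code; `retrieve()` and `analyze()` below are invented placeholders standing in for real OSINT tooling:

```python
# Minimal sketch of the classic OSINT loop quoted above
# (define requirements -> retrieve -> analyze -> pivot or report).
def investigate(requirements, retrieve, analyze, max_pivots=3):
    findings = []
    for _ in range(max_pivots + 1):
        data = retrieve(requirements)
        result, new_leads = analyze(data)
        findings.append(result)
        if not new_leads:          # nothing left to pivot on: report
            break
        requirements = new_leads   # pivot: leads become new requirements
    return findings

# Invented example: one pivot from a domain to a subdomain it reveals.
known = {"example.com": ["mail.example.com"], "mail.example.com": []}
retrieve = lambda reqs: reqs
analyze = lambda data: ("analyzed " + ", ".join(data),
                        [lead for d in data for lead in known.get(d, [])])
print(investigate(["example.com"], retrieve, analyze))
# ['analyzed example.com', 'analyzed mail.example.com']
```

      The `max_pivots` cap reflects the practical point that an investigation has to end in a report rather than pivot forever.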

    1. There’s a bug in the evolutionary code that makes up our brains.

      Saying it's a "bug" implies that it's bad. But something this significant likely improved our evolutionary fitness in the past. This "bug" is more of a previously useful adaptation. Whether it's still useful or not is another question, but it might be.

  11. Apr 2019
    1. Ashley Norris is the Chief Academic Officer at ProctorU, an organization that provides online exam proctoring for schools. This article has an interesting overview of the negative side of technology advancements and what that has meant for students' ability to cheat. While the article does culminate as an ad, of sorts, for ProctorU, it is an interesting read and sparks thoughts on ProctorU's use of both human monitors for testing and its integration of Artificial Intelligence into the process.

      Rating: 9/10.

  12. Mar 2019
    1. If you do not like the price you’re being offered when you shop, do not take it personally: many of the prices we see online are being set by algorithms that respond to demand and may also try to guess your personal willingness to pay. What’s next? A logical next step is that computers will start conspiring against us. That may sound paranoid, but a new study by four economists at the University of Bologna shows how this can happen.
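
      The demand-responsive pricing described here can be illustrated with a toy rule (purely a sketch; the function and coefficients are invented, not the study's model):

```python
# Toy illustration of demand-responsive pricing: nudge the price up when
# recent demand outstrips supply, and down when demand is slack.
# The sensitivity coefficient is invented for illustration.
def next_price(price, demand, supply, sensitivity=0.1):
    """Move price proportionally to the demand/supply imbalance."""
    imbalance = (demand - supply) / max(supply, 1)
    return round(price * (1 + sensitivity * imbalance), 2)

print(next_price(100.0, demand=150, supply=100))  # high demand -> 105.0
print(next_price(100.0, demand=50, supply=100))   # low demand  -> 95.0
```

      The study's point is that when several sellers run rules like this against each other, the rules can learn to sustain high prices without any explicit agreement.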
    1. Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning.

      Again, this doesn't conflict with a machine-learning or deep-learning or neural-net way of seeing IP.

    2. No ‘copy’ of the story is ever made

      Or, the copy initially made is changed over time since human "memory" is interdependent and interactive with other brain changes, whereas each bit in computer memory is independent of all other bits.

      However, machine learning probably results in interactions between bits as the learning algorithm is exposed to more training data. The values in a deep neural network interact in ways that are not so obvious. So this machine-human analogy might be getting new life with machine learning.

    3. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight

      I don't see how this is true. The IP perspective depends on algorithms. There are many different algorithms to perform various tasks. Some perform reverse-kinematic calculations, but others conduct simpler, repeated steps. In computer science, this might be dynamic programming, recursive algorithms, or optimization. It seems that the IP metaphor still fits: it's just that those using the metaphor may not have updated their model of IP to be more modern.

    1. It is no wonder that AI is gaining popularity. Many facts and advantages are driving such profitable growth of AI. The essential peculiarities are fully presented in the given article.

    1. You were beginning to gather that there were other symbols mixed with the words that might be part of a sentence, and that the different parts of what made a full-thought statement (your feeling about what a sentence is) were not just laid out end to end as you expected.

      This suggests that Joe is doing something almost completely unrecognizable--with language at least. I guess my assumption is that I would know what Joe was doing; he'd just be doing it so quickly I wouldn't be able to follow. And he'd complete the task--a task I recognize--far more quickly than I possibly could using comparable analog technologies. Perhaps this is me saying I buy Engelbart's augmentation idea on the level of efficiency but remain skeptical, or at least have yet to realize, its transformative effect on intellect itself.

    2. we provide him as much help as possible in making a plan of action. Then we give him as much help as we can in carrying it out. But we also have to allow him to change his mind at almost any point, and to want to modify his plans.

      I'm thinking about the role of AI tutors/advisors here. How often do they operate in the kind of flexible way described here? I wonder if they can without actual human intervention.

  13. Feb 2019
    1. In amplifying our intelligence, we are applying the principle of synergistic structuring that was followed by natural evolution in developing the basic human capabilities. What we have done in the development of our augmentation means is to construct a superstructure that is a synthetic extension of the natural structure upon which it is built. In a very real sense, as represented by the steady evolution of our augmentation means, the development of "artificial intelligence" has been going on for centuries.

      Engelbart explicitly noted that what he was trying to do was not just hack culture, which is what significant innovations accomplish, but to hack the process by which biological and cultural co-evolution has bootstrapped itself to this point. Culture used the capabilities provided by biological evolution -- language, thumbs, etc. -- to improve human ways of living much faster than biological evolution can do, by not just inventing, but passing along to each other and future generations the knowledge of what was invented and how to invent. Engelbart proposes an audio-visual-tactile interface to computing as a tool for consciously accelerating the scope and power of individual and collective intelligence.

    2. Our culture has evolved means for us to organize the little things we can do with our basic capabilities so that we can derive comprehension from truly complex situations, and accomplish the processes of deriving and implementing problem solutions. The ways in which human capabilities are thus extended are here called augmentation means, and we define four basic classes of them: 2a4 Artifacts—physical objects designed to provide for human comfort, for the manipulation of things or materials, and for the manipulation of symbols.2a4a Language—the way in which the individual parcels out the picture of his world into the concepts that his mind uses to model that world, and the symbols that he attaches to those concepts and uses in consciously manipulating the concepts ("thinking"). 2a4b Methodology—the methods, procedures, strategies, etc., with which an individual organizes his goal-centered (problem-solving) activity. 2a4c Training—the conditioning needed by the human being to bring his skills in using Means 1, 2, and 3 to the point where they are operationally effective. 2a4d The system we want to improve can thus be visualized as a trained human being together with his artifacts, language, and methodology. The explicit new system we contemplate will involve as artifacts computers, and computer-controlled information-storage, information-handling, and information-display devices. The aspects of the conceptual framework that are discussed here are primarily those relating to the human being's ability to make significant use of such equipment in an integrated system.

      To me, this is the most prescient of Engelbart's future visions, and the seed for future study of culture-technology co-evolution. I talked with Engelbart about this passage over the years and we agreed that although the power of the artifacts, from RAM to CPU speed to network bandwidth, had improved by the billionfold since 1962, the "softer" parts of the formula -- the language, methodology, and training -- have not advanced so much. Certainly language, training methods and pedagogy, and collaborative strategies have evolved with the growth and spread of digital media, but are still lagging. H-LAM/T interests me even more today than it did thirty years ago because Engelbart unknowingly forecast the fundamental elements of what has come to be called cultural-biological co-evolution. I gave a TED talk in 2005, calling for an interdisciplinary study of human cooperation -- and obstacles to cooperation. It seems that in recent years an interdisciplinary understanding has begun to emerge. Joseph Henrich at Harvard, for one, in his recent book, The Secret of Our Success, noted:

      Drawing insights from lost European Explorers, clever chimpanzees, hunter-gatherers, cultural neuroscience, ancient bones and the human genome, Henrich shows that it’s not our general intelligence, innate brain power, or specialized mental abilities that explain our success. Instead, it’s our collective brains, which arise from a combination of our ability to learn selectively from each other and our sociality. Our collective brains, which often operate outside of any individual’s conscious awareness, gradually produce increasingly complex, nuanced and subtle technological, linguistic and social products over generations.

      Tracking this back into the mist of our evolutionary past, and to the remote corners of the globe, Henrich shows how this non-genetic system of cultural inheritance has long driven human genetic evolution. By producing fire, cooking, water containers, tracking know-how, plant knowledge, words, hunting strategies and projectiles, culture-driven genetic evolution expanded our brains, shaped our anatomy and physiology, and influenced our psychology, making us into the world’s only living cultural species. Only by understanding cultural evolution, can we understand human genetic evolution.

      Henrich, Boyd, and Richerson wrote about the social fundamentals that distinguish human culture's methods of evolving collective intelligence in The Origin and Evolution of Culture:

      Surely, without punishment, language, technology, individual intelligence and inventiveness, ready establishment of reciprocal arrangements, prestige systems and solutions to games of coordination, our societies would take on a distinctly different cast. Thus, a major constraint on explanations of human sociality is its systemic structure

    1. Nearly half of FBI rap sheets failed to include information on the outcome of a case after an arrest—for example, whether a charge was dismissed or otherwise disposed of without a conviction, or if a record was expunged

      This explains my personal experience here: https://hyp.is/EIfMfivUEem7SFcAiWxUpA/epic.org/privacy/global_entry/default.html (Why someone who had Global Entry was flagged for a police incident before he applied for Global Entry).

    2. Applicants also agree to have their fingerprints entered into DHS’ Automatic Biometric Identification System (IDENT) “for recurrent immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

      Intelligence checks is very concerning here as it suggests pretty much what has already been leaked, that the US is running complex autonomous screening of all of this data all the time. This also opens up the possibility for discriminatory algorithms since most of these are probably rooted in machine learning techniques and the criminal justice system in the US today tends to be fairly biased towards certain groups of people to begin with.

    3. It cited research, including some authored by the FBI, indicating that “some of the biometrics at the core of NGI, like facial recognition, may misidentify African Americans, young people, and women at higher rates than whites, older people, and men, respectively.

      This re-affirms the previous annotation that the set of training data for the intelligence checks the US runs on global entry data is biased towards certain groups of people.

  14. Jan 2019
    1. machine intelligence

      Interestingly enough, we saw it coming. All the advances in technology that led to this much efficiency were not to be taken lightly. A few decades ago (about 35 years, since the invention of the internet and online networks in 1983) people probably saw the internet as a gift from the heavens -- one with little or no downside to it. But now that it has advanced to such an extreme, with advanced machine engineering, we have learned otherwise. The hacking of sites and networks, viruses and malware, and user data surveillance and monitoring are only a few of the downsides to such a heavenly creation. And now we face the truth: machine intelligence is not to be underestimated! Otherwise the impact on our lives in years to come could be negative, because it will only get more intense as technology develops further.

    1. AI Robots will be replacing the White Collar Jobs by 6% until 2021

      AI software and chatbots will be incorporated into current technologies and automated with robotic systems. They will be given rights to access calendars, email accounts, browsing history, playlists, past purchases, and media viewing history. Six percent is a huge number worldwide, and many people will be left struggling to find jobs. But there are benefits too, as your work will get done more easily and quickly.

    1. With approximately half of variance in cognitive task performance having non-g sources of variance, we believe that other traits may be important in explaining cognitive performance of both non-Western and Western groups.

      Another important implication of the study.

    2. Although we believe that this study establishes the presence of g in data from these non-Western cultures, this study says nothing about the relative level of general cognitive ability in various societies, nor can it be used to make cross-cultural comparisons. For this purpose, one must establish measurement invariance of a test across different cultural groups (e.g., Holding et al., 2018) to ensure that test items and tasks function in a similar way for each group.

      This is absolutely essential to understanding the implications of the article.

    3. Two peer reviewers raised the possibility that developmental differences across age groups could be a confounding variable because a g factor may be weaker in children than adults.

      Colom also suggested this (see link above). The fact that three people independently had this concern that age could be a moderator variable is telling. I'm glad the peer reviewers had us do this post hoc analysis.

    4. some of these data sets were collected by individuals who are skeptical of the existence or primacy of g in general or in non-Western cultures (e.g., Hashmi et al., 2010; Hashmi, Tirmizi, Shah, & Khan, 2011; O’Donnell et al., 2012; Pitchford & Outhwaite, 2016; Stemler et al., 2009; Sternberg et al., 2001, 2002). One would think that these investigators would be most likely to include variables in their data sets that would form an additional factor. Yet, with only three ambiguous exceptions (Grigorenko et al., 2006; Gurven et al., 2017), these researchers’ data still produced g.

      This is particularly strong evidence for me. If g doesn't exist, these researchers would be the most likely ones to gather data to show that.

    5. the strongest first factor accounted for 86.3% of observed variable variance

      I suspect that this factor was so strong because it consisted of only four observed variables, and three of them were written measures of verbal content. All of the verbal variables correlated r = .72 to .89. Even the "non-verbal" variable (numerical ability) correlates r = .72 to .81 with the other three variables (Rehna & Hanif, 2017, p. 25). Given these strong correlations, a very strong first factor is almost inevitable.
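      A quick numerical check of this point: for a correlation matrix, the largest eigenvalue divided by the number of variables approximates the share of variance the first unrotated factor explains. A minimal sketch in Python, using an illustrative four-variable matrix with values in the reported r = .72 to .89 range (not the actual published matrix):

```python
import numpy as np

# Illustrative correlation matrix for four highly correlated measures
# (values chosen in the reported r = .72 to .89 range, not taken from
# Rehna & Hanif, 2017).
R = np.array([
    [1.00, 0.89, 0.80, 0.72],
    [0.89, 1.00, 0.81, 0.75],
    [0.80, 0.81, 1.00, 0.78],
    [0.72, 0.75, 0.78, 1.00],
])

# Eigenvalues of a correlation matrix sum to the number of variables,
# so largest eigenvalue / 4 is the first factor's variance share.
eigvals = np.linalg.eigvalsh(R)  # ascending order
first_factor_share = eigvals[-1] / R.shape[0]
print(f"First factor explains ~{first_factor_share:.1%} of the variance")
```

      With correlations this strong, the first factor's share lands in the mid-80s percent range, which is why a dominant first factor is nearly inevitable for such a battery.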

    6. The weakest first factor accounted for 18.3% of variance

      This factor may be weak because the sample consists of Sudanese gifted children, which may have restricted the range of correlations in the dataset.

    7. The mean sample size of the remaining data sets was 539.6 (SD = 1,574.5). The large standard deviation in relationship to the mean is indicative of the noticeably positively skewed distribution of sample sizes, a finding supported by the much smaller median of 170 and skewness value of 6.297. There were 16,559 females (33.1%), 25,431 males (48.6%), and 10,350 individuals whose gender was unreported (19.8%). The majority of samples—62 of 97 samples (63.9%)—consisted entirely or predominantly of individuals below 18. Most of the remaining samples contained entirely or predominantly adults (32 data sets, 33.0%), and the remaining 3 datasets (3.1%) had an unknown age range or an unknown mix of adults and children. The samples span nearly the entire range of life span development, from age 2 to elderly individuals.

      My colleague, Roberto Colom, stated in his blog (link below) that he would have discarded samples with fewer than 100 individuals. This is a legitimate analysis decision. See his other commentary (in Spanish) at https://robertocolom.wordpress.com/2018/06/01/la-universalidad-del-factor-general-de-inteligencia-g/
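      The quoted pattern (an SD roughly three times the mean, with the median far below both) is the signature of a positively skewed, nonnegative quantity, and it is easy to reproduce in simulation. A minimal sketch with illustrative log-normal draws (the parameters are made up, not fitted to the study's samples):

```python
import numpy as np

rng = np.random.default_rng(42)

# Log-normal draws mimic the reported pattern for sample sizes: a few
# very large studies drag the mean and SD far above the median.
# Parameters here are illustrative, not fitted to the paper's data.
sizes = rng.lognormal(mean=5.1, sigma=1.3, size=97)

mean = sizes.mean()
sd = sizes.std(ddof=1)
median = np.median(sizes)
skew = np.mean(((sizes - mean) / sizes.std()) ** 3)

# For draws like these, mean > median and skewness > 0, matching the
# paper's diagnosis of positive skew from summary statistics alone.
print(f"mean={mean:.1f}  SD={sd:.1f}  median={median:.1f}  skew={skew:.2f}")
```

      The heavy right tail is what pushes the SD above the mean; the median is the more representative "typical" sample size in such a distribution.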

    8. Alternatively, one could postulate that a general cognitive ability is a Western trait but not a universal trait among humans, but this would require an evolutionary model where this general ability evolved several times independently throughout the mammalian clade, including separately in the ancestors of Europeans after they migrated out of Africa and separated from other human groups. Such a model requires (a) a great deal of convergent evolution to occur across species occupying widely divergent environmental niches and (b) an incredibly rapid development of a general cognitive ability while the ancestors of Europeans were under extremely strong selection pressures that other humans did not experience (but other mammal species or their ancestors would have experienced at other times). We find the more parsimonious model of an evolutionary origin of the general cognitive ability in the early stages of mammalian development to be the more plausible one, and thus we believe that it is reasonable to expect a general cognitive ability to be a universal human trait.

      It was this reasoning that led to the decision to conduct this study. There is mounting evidence that g exists in other mammalian species, and it definitely exists in Western cultures. It seemed really unlikely that it would not exist in non-Western groups. But I couldn't find any data about the issue. So, time to do a study!

    9. Researchers studying the cognitive abilities of animals have identified a general factor in cognitive data from many mammal species

      Also, cats:

      Warren, J. M. (1961). Individual differences in discrimination learning by cats. The Journal of Genetic Psychology, 98, 89-93. doi:10.1080/00221325.1961.10534356

    10. Investigating cultural beliefs about intelligence may be mildly interesting from an anthropological perspective, but it sheds little light on the nature of intelligence. One undiscussed methodological problem in many studies of cultural perspectives on intelligence is the reliance on surveys of laymen to determine what people in a given culture believe about intelligence. This methodology says little about the actual nature of intelligence.

      After the manuscript was accepted for publication and the proofs turned in, I re-discovered the following from Gottfredson (2003, p. 362): ". . . lay beliefs are interesting but their value for scientific theories of intelligence is limited to hypothesis generation. Even if the claim were true, then, it would provide no evidence for the truth of any intelligence theory . . ."

      Gottfredson, L. S. (2003). Dissecting practical intelligence theory: Its claims and evidence. Intelligence, 31, 343-397. doi:10.1016/S0160-2896(02)00085-5

    11. Panga Munthu test of intelligence

      To me, this is the way to create tests of intelligence for non-Western cultures: find skills and manifestations of intelligence that are culturally appropriate for a group of examinees and use those skills to tap g. Cross-cultural testing would require identifying skills that are valued or developed in both cultures.

    12. Berry (1986)

      John W. Berry is a cross-cultural psychologist whose work stretches back over 50 years. He takes the position (e.g., Berry, 1986) that definitions of intelligence are culturally-specific and are bound up with the skills cultures encourage and that the environment requires people to develop. Therefore, he does not see Western definitions as applying to most groups.

      After this study, my position is a more nuanced one. I agree with Berry that the manifestations of intelligence can vary from culture to culture, but underneath these surface features lies g in all humans.

    1. Curiosity Is as Important as Intelligence

      This one is a pretty bold statement to make, in general.

      Mike Johansson, at Rochester Institute of Technology, makes the case that curiosity is the key to enabling both Creative and Critical Thinking for better problem solving, in general.

      What are some of your ideas?

    1. We believe that members of the public likely learn some inaccurate information about intelligence in their psychology courses. The good news about this implication is that reducing the public’s mistaken beliefs about intelligence will not take a massive public education campaign or public relations blitz. Instead, improving the public’s understanding about intelligence starts in psychology’s own backyard with improving the content of undergraduate courses and textbooks.

      To me, this is the "take home" message of the article. I hope psychology educators do more to improve the accuracy of their lessons about intelligence. I also hope more programs add a course on the topic to their curriculum.

    2. This means that it is actually easier to measure intelligence than many other psychological constructs. Indeed, some individuals trying to measure other constructs have inadvertently created intelligence tests

      When I learned this, it blew my mind.

    3. many psychologists simply accept an operational definition of intelligence by spelling out the procedures they use to measure it. . . . Thus, by selecting items for an intelligence test, a psychologist is saying in a direct way, “This is what I mean by intelligence.” A test that measures memory, reasoning, and verbal fluency offers a very different definition of intelligence than one that measures strength of grip, shoe size, hunting skills, or the person’s best Candy Crush mobile game score. (p. 290)

      Ironically, there is research showing that video game performance is positively correlated with intelligence test scores (e.g., Angeles Quiroga et al., 2015; Foroughi, Serraino, Parasuraman, & Boehm-Davis, 2016).

      Not every inaccurate statement in the textbooks was as silly as this one. Readers would benefit from browsing Supplemental File 2.

    4. Minnesota Transracial Adoption Study

      This is a study begun in the 1970s of African American, interracial, and other minority group children who had been adopted by White families in Minnesota. The 1976 results indicated large IQ boosts (about 12 points) for adopted African American children at age 6, compared to the average IQ for African Americans in general. However, the 1992 report shows that the advantage had faded to about 6 points when the children were aged 17 years. Generally, intelligence experts see this landmark study as supporting both "nature" and "nurture."

    5. the Stanford-Binet intelligence test

      Although the Stanford-Binet is historically important, the Wechsler family of intelligence tests has been more popular since the 1970s.

    6. Some readers will also be surprised to find that The Bell Curve is not as controversial as its reputation would lead one to believe (and most of the book is not about race at all).

      I wrote this sentence. Two coauthors, three peer reviewers, and an editor all read it multiple times. No one ever asked for it to be changed.

    7. Gardner’s multiple intelligences

      I have a Twitter moment that analyzes Gardner's book "Frames of Mind" and shows why this theory is poorly supported by empirical data. https://twitter.com/i/moments/1064036271847161857

    8. Most frequently this appeared in the form of a tacit acknowledgment that IQ test scores correlate with academic success, followed by a quick denial that the scores are important for anything else in life
    9. this study highlights the mismatch between scholarly consensus on intelligence and the beliefs of the general public

      Christian Jarrett of The British Psychological Society found this to be the main message of the article. Read his blog post at https://digest.bps.org.uk/2018/03/08/best-selling-introductory-psychology-books-give-a-misleading-view-of-intelligence/

    10. Judged solely by the number of factually inaccurate statements, the textbooks we examined were mostly accurate.

      A blog post by James Thompson (psychology professor emeritus at University College London) has a much more acerbic response to the study than this. See his blog post for a contrasting viewpoint: http://www.unz.com/jthompson/fear-and-loathing-in-psychology/

    11. We found that 79.3% of textbooks contained inaccurate statements and 79.3% had logical fallacies in their sections about intelligence.
    12. Gottfredson’s (1997a) mainstream statement on intelligence

      This article is a classic, and it is required reading in my undergraduate human intelligence course. If you only have time to read one article about intelligence, this should be it.

    1. CTP synthesizes critical reflection with technology production as a way of highlighting and altering unconsciously-held assumptions that are hindering progress in a technical field.

      Definition of critical technical practice.

      This approach is grounded in AI rather than HCI

      (verbatim from the paper) "CTP consists of the following moves:

      • identifying the core metaphors of the field

      • noticing what, when working with those metaphors, remains marginalized

      • inverting the dominant metaphors to bring that margin to the center

      • embodying the alternative as a new technology"

  15. Nov 2018
    1. most importantly, however, when the group has real synergy, it will by far exceed the best individual performance. Synergy is best thought of as members of the same team feeding off one another in positive ways; as result the "whole" becomes better than "the sum of the parts". Collaboration can actually raise the "group IQ" – i.e. the sum total of the best talents of each member on the team.

      Synergy.

    1. In effective collaboration, all people involved use their emotional intelligence well to balance emotional needs with their thinking, build authentic relationships and make good quality decisions on behalf of the organisation. Whether working with others one-to-one, in small groups or large teams, there is exemplary communication with empathy that engages hearts and minds.  This occurs at all levels of the organisation.

      How emotional intelligence affects collaboration.

    1. It turns out emotional intelligence in a group setting accelerates the group's development. Team members need EI on an individual level. And when we work together with EI, it's fascinating to see what happens. Studies are finding that collaboration among those with high emotional intelligence creates outcomes that exceed the sum of their individual talents. Shared emotional intelligence not only improves work processes, it improves the work product!

      Emotional intelligence helps increase collaboration.

    1. The decisive thing is that they remain masters of the process - and develop a vision for the new machine age.

      It doesn't really look to me as if we were ever the "masters of the process" to begin with. And that is also what Marx is getting at. I think.

    1. By framing "genius" as something intrinsic rather than situational, we deny even the potential for achievement to a huge fraction of the population. As paleontologist Stephen Jay Gould wrote in The Panda's Thumb, "I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops."
    2. The problem is far worse when used to generalize about groups, such as gender and especially race. When combined with the cultural belief that only the "brainy" are worthy of science training, it becomes a self-reinforcing cycle: only certain white men are inherently "smart enough", as decided primarily by other white men. You'll hear (and I'll bet cash money that someone will argue in the comments) that African-American underrepresentation in science is because they're not "smart" or "motivated" enough, not that black-majority school districts are often underfunded, lacking teachers, supplies, and other necessities for STEM prep — not to mention daily challenges to their authority and intelligence for those who do earn STEM degrees.
    3. To make matters worse, "intelligence" itself is weaponized by the status quo against people of color and white women. That's especially evident in the continuing battles over the interpretation of IQ test results.
    4. Science writer Kat Arney delved into this issue in detail in a recent column for the (UK) Royal Society of Chemistry. As she points out, the problems with the "brainy scientist" stereotype are manifold: it implies that science is a meritocracy and that non-scientists are somehow less valuable.
  16. Sep 2018
    1. No. It’s not you. You were different before. – I’m still the same person, Lin. – I wasn’t, when I was on it. I did things I would never do. – Those things saved your life. – But they weren’t me. – Yes, they were. No, the way it works… – I know how it works. I get it. I totally get it. You feel invincible.

      The rhetoric of this passage raises a very important question: are the people taking this drug really still themselves? If it were merely a thought-enhancing drug, perhaps they would be, but it does more than make the user hyper-intelligent. The fact that the drug changes people's attitudes and personalities suggests that these people aren't themselves. On the other hand, hyper-intelligence may not directly change the person so much as enable them, because higher intelligence could reasonably lead to greater confidence and more rational thinking.

    1. Good has captured the essence of the runaway, but he does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankinds “tool” – any more than humans are the tools of rabbits, robins, or chimpanzees.

      If humanity were to create an ultra-intelligent computer, humans would be far surpassed. This part of the passage says that our minds would be so simple compared to those of the machines at the point of the singularity that we would be little more than rabbits. This idea is astounding because our mental capabilities are vastly superior to those of rabbits and other animals, and the thought that we could be so easily and so greatly surpassed is probably terrifying to many people. This may be what makes the singularity such an abstract idea.

    2. And its very likely that IA is a much easier road to the achievement of superhumanity than pure AI. In humans, the hardest development problems have already been solved. Building up from within ourselves ought to be easier than figuring out what we really are and then building machines that are all of that.

      The authors of the text propose a radically different approach to the inevitable "singularity" event: the research and development of IA, or Intelligence Amplification, which builds computers in symbiosis with humans. They note that IA could be easier to develop than pure AI, since in humans the hardest development problems have already been solved; humanity would only have to probe its true weaknesses and strengths, then develop an IA system that covers those weaknesses. This would keep the technology from bootstrapping beyond us, potentially delaying the point at which we reach the singularity.

    1. Into the Wormhole

      This scene, as with many others, represents a crucial point in the movie. I think it also connects to what was discussed on the first day of class in relation to singularity. One student suggested that, in an astronomical context, a singularity is a black hole that implodes on itself. In a linguistics context, we described how singularity shares connections with being alone, unique, and individual. In the "Into the Wormhole" scene in 2001: A Space Odyssey, all of humankind is erased except for David. Advancements in technology had reached such a peak that David literally enters a black hole, or wormhole, in which he lives the rest of his life alone and passes away quietly. This signifies mankind imploding on itself and being reborn. His passage signifies the rebirth of humankind and presents the idea of the cycle of life, where technology is nonexistent and life begins again. There is a sense of reverse chronology in the movie, as the ending scene is continued at the beginning, where the apes demonstrate their journey toward intelligence once more.

  17. Aug 2018
    1. Capacity can also affect crisis potential through staffing decisions that affect the diversity of acts that are available. Enactment is labour-intensive, which means understaffing has serious effects.

      A diverse labor force is also a central principle of effective crowdsourcing and collective intelligence.

    1. "... groups of individuals doing things collectively that seem intelligent.” [41]

      Collective intelligence definition.

      Per the authors, "collective intelligence is a superset of social computing and crowdsourcing, because both are defined in terms of social behavior."

      Collective intelligence is differentiated from human computation because the latter doesn't require a group.

      It is differentiated from crowdsourcing because it doesn't require a public crowd and it can happen without an open call.

    1. Thus it becomes possible to see how questions around data use need to shift from asking what is in the data, to include discussions of how the data is structured, and how this structure codifies value systems and social practices, subject positions and forms of visibility and invisibility (and thus forms of surveillance), along with the very ideas of crisis, risk governance and preparedness. Practices around big data produce and perpetuate specific forms of social engagement as well as understandings of the areas affected and the people being served.

      How data structure influences value systems and social practices is a much-needed topic of inquiry.

    2. Big data is not just about knowing more. It could be – and should be – about knowing better or about changing what knowing means. It is an ethico-episteme-ontological-political matter. The ‘needle in the haystack’ metaphor conceals the fact that there is no such thing as one reality that can be revealed. But multiple, lived realities are made through mediations and human and technological assemblages. Refugees’ realities of intersecting intelligences are shaped by the ethico-episteme-ontological politics of big data.

      Big, sweeping statement that helps frame how big data could be better conceptualized as a complex, socially contextualized, temporal artifact.

    3. Burns (2015) builds on this to investigate how within digital humanitarianism discourses, big data produce and perform subjects ‘in need’ (individuals or communities affected by crises) and a humanitarian ‘saviour’ community that, in turn, seeks answers through big data

      I don't understand what Burns is arguing here. To whom is he referring who claims that DHN is a "savior" or "the solution" to crisis response?

      "Big data should therefore be conceptualized as a framing of what can be known about a humanitarian crisis, and how one is able to grasp that knowledge; in short, it is an epistemology. This epistemology privileges knowledges and knowledge-based practices originating in remote geographies and de-emphasizes the connections between multiple knowledges.... Put another way, this configuration obscures the funding, resource, and skills constraints causing imperfect humanitarian response, instead positing volunteered labor as ‘the solution.’ This subjectivity formation carves a space in which digital humanitarians are necessary for effective humanitarian activities." (Burns 2015: 9–10)

    4. Crises are often not a crisis of information. It is often not a lack of data or capacity to analyse it that prevents ‘us’ from preventing disasters or responding effectively. Risk management fails because there is a lack of a relational sense of responsibility. But this does not have to be the case. Technologies that are designed to support collaboration, such as what Jasanoff (2007) terms ‘technologies of humility’, can be better explored to find ways of framing data and correlations that elicit a greater sense of relational responsibility and commitment.

      Is it "a lack of relational sense of responsibility" in crisis response (state vs private sector vs public) or is it the wicked problem of power, class, social hierarchies, etc.?

      "... ways of framing data and correlations that elicit a greater sense of responsibility and commitment."

      That could have a temporal component to it to position urgency, timescape, horizon, etc.

    5. In some ways this constitutes the production of ‘liquid resilience’ – a deflection of risk to the individuals and communities affected which moves us from the idea of an all-powerful and knowing state to that of a ‘plethora of partial projects and initiatives that are seeking to harness ICTs in the service of better knowing and governing individuals and populations’ (Ruppert 2012: 118)

      This critique addresses surveillance-state concerns about gluing datasets together to form a broader understanding of aggregate social behavior without the necessary constraints/warnings about social contexts and discontinuity between data.

      Skimmed the Ruppert paper, sadly doesn't engage with time and topologies.

    6. Indeed, as Chandler (2015: 9) also argues, crowdsourcing of big data does not equate to a democratisation of risk assessment or risk governance:

      Beyond this quote, Chandler (in engaging crisis/disaster scenarios) argues that Big Data may be more appropriately framed as community reflexive knowledge than causal knowledge. That's an interesting idea.

      "Thus, it would be more useful to see Big Data as reflexive knowledge rather than as causal knowledge. Big Data cannot help explain global warming but it can enable individuals and household to measure their own energy consumption through the datafication of household objects and complex production and supply chains. Big Data thereby datafies or materialises an individual or community’s being in the world. This reflexive approach works to construct a pluralised and multiple world of self-organising and adaptive processes. The imaginary of Big Data is that the producers and consumers of knowledge and of governance would be indistinguishable; where both knowing and governing exist without external mediation, constituting a perfect harmonious and self-adapting system: often called ‘community resilience’. In this discourse, increasingly articulated by governments and policy-makers, knowledge of causal connections is no longer relevant as communities adapt to the real-time appearances of the world, without necessarily understanding them."

      "Rather than engaging in external understandings of causality in the world, Big Data works on changing social behaviour by enabling greater adaptive reflexivity. If, through Big Data, we could detect and manage our own biorhythms and know the effects of poor eating or a lack of exercise, we could monitor our own health and not need costly medical interventions. Equally, if vulnerable and marginal communities could ‘datafy’ their own modes of being and relationships to their environments they would be able to augment their coping capacities and resilience without disasters or crises occurring. In essence, the imaginary of Big Data resolves the essential problem of modernity and modernist epistemologies, the problem of unintended consequences or side-effects caused by unknown causation, through work on the datafication of the self in its relational-embeddedness. This is why disasters in current forms of resilience thinking are understood to be ‘transformative’: revealing the unintended consequences of social planning which prevented proper awareness and responsiveness. Disasters themselves become a form of ‘datafication’, revealing the existence of poor modes of self-governance."

      Downloaded Chandler paper. Cites Meier quite a bit.

    7. But Burns finds that humanitarian staff often describe the local communities and ‘crowds’ as the ‘eyes, ears and sensors’ of UN staff, which does not index a genuine collaborative relationship. He states: ‘In all these cases, the discourse talks of putting local people “in the driving seat” when in reality the direction of the journey has already been decided’ (Burns 2015: 48). Burns (2015: 42) also notes that this leads to a transformation of social responsibility into individual responsibility. Neoliberalism’s promotion of free market norms is therefore much more than the simple ideology of free market economics. It is a specific form of social rule that institutionalises a rationality of competition, enterprise and individualised responsibility. Although the state ‘steps back’ and encourages the free conduct of individuals, this is achieved through active intervention into civil society and the opening up of new areas to the logic of private enterprise and individual initiative. This is the logic behind the rise of resilience

      Burns's criticism of humanitarian response as not truly collaborative and as an abdication of the state's responsibility for social welfare to the private sector.

    8. The UNHCR has even called for the refugees themselves to also develop their own data solutions and ideas (see Palmer 2014) as a way to help build their ideologies into the data infrastructures and thus bring their prisms into view. This could create a richer situational awareness and a better ability to understand and deal with unfolding and future crises by supporting resilient communities through giving them the means of data producing and sharing

      Participatory design and community-centered design could be very helpful in this regard, but this argument seems overstated.

      Evokes concerns about "distant suffering" (see: Chouliaraki, 2008): Who gets to share? What community? Refugees are not homogeneous.

    9. Doing so switches the discourse from vulnerability, where there is a need for external protection mobilised from above to come in and rescue the refugees, to one of resilience, where self-sufficiency and autonomy are part of the equation (Meier 2013).

      The dichotomy between state-led response vs community-coordinated response as the only ways to deliver aid seems unnecessarily limited.

      It can be both and other models/new ideas.

      Conflict- and persecution-driven humanitarian needs are often rife with complexity and receive scant attention outside of the humanitarian INGO sector.

    10. Yet, at the same time as power is exercised by both the state and corporations, power is gathering from the bottom up in new ways. In disaster response, a dynamic interplay between publics and experts is captured by the concept of social collective intelligence (Büscher et al. 2014); a disruptive innovative force that is challenging the social, economic, political and organisational practices that shape disaster response.

      Cited paper references social media and DHN work.

    11. Since the data is already being collected on a regular basis by ubiquitous private firms, it is thought to contain information that will increase opportunities for intelligence gathering and thereby security. This marks a shift from surveillance to ‘dataveillance’ (van Dijck 2014), where the impetus for data processing is no longer motivated by specific purposes or suspicions, but opportunistic discovery of anomalies that can be investigated. For crisis management this could mean benefits such as richer situation awareness, increased capacity for risk assessment, anticipation and prediction, as well as more agile response

      Dataveillance definition.

      The supposed benefits for crisis management don't square with the earlier criticisms about data quality, loss of contextualization, and the accuracy of predictive analytics.

      The following paragraph clears up some of the overly optimistic promises. Perhaps this section is simply overstated for rhetorical purposes.

    12. Although Snowden’s revelations shocked the world and prompted calls for a public debate on issues of privacy and transparency

      I understand the desire to use a topical hook to explain a complex topic, but referring to the highly contentious Snowden scandal as a frame seems risky (alienating) and could undermine an important argument about the surveillance state should new revelations emerge about his motives/credibility.

    13. While seemingly avoiding the traps of exerting top-down power over people the state does not yet have formal control over, and simultaneously providing support for self-determination and choice to empower individuals for self-sufficiency rather than defining them as vulnerable and passive recipients of top-down protection (Meier 2013), tying individual aid to mobile tracking puts refugees in a situation where their security is dependent upon individual choice and the private sector. Apart from disrupting traditional dynamics of responsibility for aid and protection, public–private sharing of intelligence brings new forms of dataveillance

      If the goal is to improve rapid/efficient response to those in need, is it necessarily only a dichotomy of top-down institutional action vs private sector/market-driven reaction? Surely, we can do better than this.

      Data/predictive analytics abuses by the private sector are legion.

      How does social construction vs technological determinism fit here? In what ways are the real traumas suffered by crisis-affected people not being taken into account during the response/relief/resiliency phases?

    14. However, with these big data collections, the focus becomes not the individual’s behaviour but social and economic insecurities, vulnerabilities and resilience in relation to the movement of such people. The shift acknowledges that what is surveilled is more complex than an individual person’s movements, communications and actions over time.

      The shift from INGO emergency response/logistics to state-sponsored, individualized resilience via the private sector seems profound here.

      There's also a subtle temporal element here of surveilling need and collecting data over time.

      Again, raises serious questions about the use of predictive analytics, data quality/classification, and PII ethics.

    15. Andrejevic and Gates (2014: 190) suggest that ‘the target becomes the hidden patterns in the data, rather than particular individuals or events’. National and local authorities are not seeking to monitor individuals and discipline their behaviour but to see how many people will reach the country and when, so that they can accommodate them, secure borders, and identify long-term social outlooks such as education, civil services, and impacts upon the host community (Pham et al. 2015).

      This seems like a terribly naive conclusion about mass data collection by the state.

      Also:

      "Yet even if capacities to analyse the haystack for needles more adequately were available, there would be questions about the quality of the haystack, and the meaning of analysis. For ‘Big Data is not self-explanatory’ (Bollier 2010: 13, in boyd and Crawford 2012). Neither is big data necessarily good data in terms of quality or relevance (Lesk 2013: 87) or complete data (boyd and Crawford 2012)."

    16. as boyd and Crawford argue, ‘without taking into account the sample of a data set, the size of the data set is meaningless’ (2012: 669). Furthermore, many techniques used by the state and corporations in big data analysis are based on probabilistic prediction which, some experts argue, is alien to, and even incomprehensible for, human reasoning (Heaven 2013). As Mayer-Schönberger stresses, we should be ‘less worried about privacy and more worried about the abuse of probabilistic prediction’ as these processes confront us with ‘profound ethical dilemmas’ (in Heaven 2013: 35).

      Primary problems to resolve regarding the use of "big data" in humanitarian contexts: dataset size/sampling, probabilistic prediction that is alien to human reasoning, and ethical abuses of PII.

    17. Second, this tracking and tracing of refugees has become a deeply ambiguous process in a world riven by political conflict, where ‘migration’ increasingly comes to be discussed in co-location with terrorism.

      Data collection process for refugees is underscored as threat surveillance, whether it is intended or not.

    18. Surveillance studies have tracked a shift from discipline to control (Deleuze 1992; Haggerty and Ericson 2000; Lyon 2014) exemplified by the shift from monitoring confined populations (through technologies such as the panopticon) to using new technologies to keep track of mobile populations.

      Design implication for ICT4D and ICT for humanitarian response -- moving beyond surveillance of confined, controlled environments to ubiquitous, omnipresent tracking of mobile populations.

    19. As Coyle and Meier (2009) argue, disasters are often seen as crises of information where it is vital to make sure that people know where to find potable water, how to ask for help, where their relatives are, or if their home is at risk; as well as providing emergency response and humanitarian agencies with information about affected populations. Such a quest for information for ‘security’, in turn, provides fertile ground for a quest for technological solutions, such as big data, which open up opportunities for the extended surveillance of everyday life. The assumption is that if only enough information could be gathered and exchanged, preparedness, resilience and control would follow. This is particularly pertinent with regard to mobile populations (Adey and Kirby 2016)

      The Information is Aid perspective that drives my research agenda.

    20. Third, at this juncture, control is being equated with visibility and visibility with personal security. But how these individuals are made visible matters for both privacy and security, let alone the politics of conflating refugees, migration and terrorism. Indeed, working with specific data framing mechanisms affects how the causes and effects of disasters are identified and what elements and people are considered (Frickel 2008)

      A finer point on threat surveillance that stems from how classifications and categories are framed.

      This also gets at post-colonial interpretations of people, places, and events.

      See: Winner, Do Artifacts Have Politics?
      See: Bowker and Star, Sorting Things Out: Classification and Its Consequences
      See: Irani, Post-Colonial Computing

    21. First, there is a double dynamic to the generation of data in the refugee crisis.

      Data is used by the state to mobilize resources for protective services (border management and immigration/asylum systems) and data is used to count/track refugees in order to provision assistance.

    22. Datafication refers to the fact that ‘we can now capture and calculate at a much more comprehensive scale the physical and intangible aspects of existence and act on them’ (Mayer-Schönberger and Cukier 2013: 97)

      Datafication definition

      It also incorporates metadata as well as information gleaned from typical sources.

    23. There is an uneasy coming together of diverse computational and human intelligences in these intersections, and the ambiguous nature of intelligence – understood, on the one hand, as a capacity for perceiving, learning and understanding and, on the other, as information obtained for strategic purposes – marks complex relationships between ‘good’ and ‘dark’ aspects of big data, surveillance and crisis management.

      The promise and peril of gathering collective intelligence, surveillance, and capturing big data during humanitarian crises.

    1. Peer production successfully elicits contributions from diverse individuals with diverse motivations – a quality that continues to distinguish it from similar forms of collective intelligence

      Benkler makes a really bold statement here about how peer production differs from collective intelligence. Not sure I buy this argument.

      Brabner on crowdsourcing:

    2. Although peer production is central to social scientific and legal research on collective intelligence, not all examples of collective intelligence created in online systems are peer production. First, (1) collective intelligence can involve centralized control over goal-setting and execution of tasks.

      Not all collective intelligence is peer production.

      Peer production must adhere to values: de-centralized control, broad range of motives/incentives and FLOSS/creative commons rights.

    3. Consistent with this example, foundational social scientific research relevant to understanding collective intelligence has focused on three central concerns: (1) explaining the organization and governance of decentralized projects, (2) understanding the motivation of contributors in the absence of financial incentives or coercive obligations, and (3) evaluating the quality of the products generated through collective intelligence systems.

      Focus of related work in collective intelligence studies:

      • organizational governance
      • motives
      • product quality

    4. Historically, researchers in diverse fields such as communication, sociology, law, and economics have argued that effective human systems organize people through a combination of hierarchical structures (e.g., bureaucracies), completely distributed coordination mechanisms (e.g., markets), and social institutions of various kinds (e.g., cultural norms). However, the rise of networked systems and online platforms for collective intelligence has upended many of the assumptions and findings from this earlier research.

      Benkler argues that the process, motives, and cultural norms of online network-driven knowledge work are different than systems previously studied and should be re-evaluated.

  18. Jul 2018
    1. Leading thinkers in China argue that putting government in charge of technology has one big advantage: the state can distribute the fruits of AI, which would otherwise go to the owners of algorithms.
  19. Jun 2018
    1. In “Getting Real,” Barad proposes that “reality is sedimented out of the process of making the world intelligible through certain practices and not others ...” (1998: 105). If, as Barad and other feminist researchers suggest, we are responsible for what exists, what is the reality that current discourses and practices regarding new technologies make intelligible, and what is excluded? To answer this question Barad argues that we need a simultaneous account of the relations of humans and nonhumans and of their asymmetries and differences. This requires remembering that boundaries between humans and machines are not naturally given but constructed, in particular historical ways and with particular social and material consequences. As Barad points out, boundaries are necessary for the creation of meaning, and, for that very reason, are never innocent. Because the cuts implied in boundary making are always agentially positioned rather than naturally occurring, and because boundaries have real consequences, she argues, “accountability is mandatory” (187): We are responsible for the world in which we live not because it is an arbitrary construction of our choosing, but because it is sedimented out of particular practices that we have a role in shaping (1998: 102). The accountability involved is not, however, a matter of identifying authorship in any simple sense, but rather a problem of understanding the effects of particular assemblages, and assessing the distributions, for better and worse, that they engender.
    2. Finally, the ‘smart’ machine's presentation of itself as the always obliging, 'labor-saving device' erases any evidence of the labor involved in its operation "from bank personnel to software programmers to the third-world workers who so often make the chips" (75).
    3. Chasin poses the question (which I return to below) of how a change in our view of objects from passive and outside the social could help to undo the subject/object binary and all of its attendant orderings, including for example male/female, or mental/manual
    4. Figured as servants, she points out, technologies reinscribe the difference between ‘us’ and those who serve us, while eliding the difference between the latter and machines: "The servant troubles the distinction between we-human-subjects-inventors with a lot to do (on the one hand) and them-object-things that make it easier for us (on the other)" (1995: 73)
    1. "When tasks require high coordination because the work is highly interdependent, having more contributors can increase process losses, reducing the effectiveness of the group below what individual members could optimally accomplish". When a team grows too large, overall effectiveness may suffer even though the extra contributors add resources; in the end, the costs of coordination can overwhelm the other gains.
    2. Games such as The Sims Series, and Second Life are designed to be non-linear and to depend on collective intelligence for expansion. This way of sharing is gradually evolving and influencing the mindset of the current and future generations.[117] For them, collective intelligence has become a norm.
    3. The UNU open platform for "human swarming" (or "social swarming") establishes real-time closed-loop systems around groups of networked users molded after biological swarms, enabling human participants to behave as a unified collective intelligence.[140][141] When connected to UNU, groups of distributed users collectively answer questions and make predictions in real-time.[142] Early testing shows that human swarms can out-predict individuals.[140] In 2016, an UNU swarm was challenged by a reporter to predict the winners of the Kentucky Derby, and successfully picked the first four horses, in order, beating 540 to 1 odds.
    4. Research performed by Tapscott and Williams has provided a few examples of the benefits of collective intelligence to business:[38]

       Talent utilization: At the rate technology is changing, no firm can fully keep up in the innovations needed to compete. Instead, smart firms are drawing on the power of mass collaboration to involve participation of the people they could not employ. This also helps generate continual interest in the firm in the form of those drawn to new idea creation as well as investment opportunities.[38]

       Demand creation: Firms can create a new market for complementary goods by engaging in open source community. Firms also are able to expand into new fields that they previously would not have been able to without the addition of resources and collaboration from the community. This creates, as mentioned before, a new market for complementary goods for the products in said new fields.[38]

       Costs reduction: Mass collaboration can help to reduce costs dramatically. Firms can release a specific software or product to be evaluated or debugged by online communities. The results will be more personal, robust and error-free products created in a short amount of time and costs. New ideas can also be generated and explored by collaboration of online communities creating opportunities for free R&D outside the confines of the company.[38]
    5. In one high-profile example, a human swarm challenge by CBS Interactive to predict the Kentucky Derby. The swarm correctly predicted the first four horses, in order, defying 542–1 odds and turning a $20 bet into $10,800.
    6. To address the problems of serialized aggregation of input among large-scale groups, recent advancements in collective intelligence have worked to replace serialized votes, polls, and markets, with parallel systems such as "human swarms" modeled after synchronous swarms in nature.
    7. While modern systems benefit from larger group size, the serialized process has been found to introduce substantial noise that distorts the collective output of the group. In one significant study of serialized collective intelligence, it was found that the first vote contributed to a serialized voting system can distort the final result by 34%
    8. To accommodate this shift in scale, collective intelligence in large-scale groups has been dominated by serialized polling processes such as aggregating up-votes, likes, and ratings over time
    9. The idea of collective intelligence also forms the framework for contemporary democratic theories often referred to as epistemic democracy.
    10. The basis and goal of collective intelligence is mutual recognition and enrichment of individuals rather than the cult of fetishized or hypostatized communities."
    11. Collective intelligence (CI) is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals and appears in consensus decision making.
  20. May 2018

  21. Apr 2018
    1. The alternative, of a regulatory patchwork, would make it harder for the West to amass a shared stock of AI training data to rival China’s.

      Fascinating geopolitical suggestion here: Trans-Atlantic GDPR-like rules as the NATO of data privacy to effectively allow "the West" to compete against the People's Republic of China in the development of artificial intelligence.

  22. Feb 2018
    1. Our principal claim is that a valid EI concept can be distinguished from other approaches. This valid conception of EI includes the ability to engage in sophisticated information processing about one’s own and others’ emotions and the ability to use this information as a guide to thinking and behavior.

      This is a really good definition imo.

    2. the term emotional intelligence is now employed to cover too many things—too many different traits, too many different concepts (Landy, 2005; Murphy & Sideman, 2006; Zeidner, Roberts, & Matthews, 2004). “These models,” wrote Daus and Ashkanasy (2003, pp. 69–70), “have done more harm than good regarding establishing emotional intelligence as a legitimate, empirical construct with incremental validity potential.

      This idea might help us not oversimplify the term 'emotional intelligence.'

    3. The original definition of EI conceptualized it as a set of interrelated abilities (Mayer & Salovey, 1997; Salovey & Mayer, 1990). Yet other investigators have described EI as an eclectic mix of traits, many dispositional, such as happiness, self-esteem, optimism, and self-management, rather than as ability based

      If they are dispositional and not ability-based then there are limitations.

    4. one commentator recently argued that EI is an invalid concept in part because it is defined in too many ways (Locke, 2005, p. 425)

      We shouldn't claim there is one simple definition of EI.

    5. The original idea was that some individuals possess the ability to reason about and use emotions to enhance thought more effectively than others

      first tentative notion of EI

  23. Jan 2018
  24. Dec 2017
    1. Most of the recent advances in AI depend on deep learning, which is the use of backpropagation to train neural nets with multiple layers ("deep" neural nets).

      Neural nets consist of layers of nodes, with edges from each node to the nodes in the next layer. The first and last layers are input and output. The output layer might only have two nodes, representing true or false. Each node holds a value representing how excited it is. Each edge has a value representing strength of connection, which determines how much of the excitement passes through.

      The edges in an untrained neural net start with random values. The training data consists of a series of samples that are already labeled. If the output is wrong, the edges are adjusted according to how much they contributed to the error. It's called backpropagation because it starts with the output nodes and works toward the input nodes.

      Deep neural nets can be effective, but only for single specific tasks. And they need huge sets of training data. They can also be tricked rather easily. Worse, someone who has access to the net can discover ways of adding noise to images that will make the net "see" things that obviously aren't there.
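      The training loop summarized in this note can be sketched in a few dozen lines of plain Python. This is a minimal illustration of backpropagation, not a production implementation; the XOR task, the 2-3-1 network shape, the sigmoid activation, the learning rate, and the iteration count are all assumptions chosen for the example.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Labeled training samples: inputs paired with the desired output (XOR).
DATA = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

N_IN, N_HID = 2, 3
# Edges start with random values, as described above: 2 inputs -> 3 hidden -> 1 output.
w_h = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
b_h = [0.0] * N_HID
w_o = [random.uniform(-1, 1) for _ in range(N_HID)]
b_o = 0.0

def forward(x):
    # Each node's value (its "excitement") is a weighted sum of the previous
    # layer, squashed by the sigmoid; edge strength controls how much passes.
    h = [sigmoid(sum(w_h[j][i] * x[i] for i in range(N_IN)) + b_h[j])
         for j in range(N_HID)]
    out = sigmoid(sum(w_o[j] * h[j] for j in range(N_HID)) + b_o)
    return h, out

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

initial_loss = total_loss()

lr = 0.5
for _ in range(5000):
    for x, target in DATA:
        h, out = forward(x)
        # Backpropagation: start from the output error and work backward
        # toward the inputs, adjusting each edge in proportion to how much
        # it contributed to the error.
        d_out = (out - target) * out * (1 - out)
        d_h = [d_out * w_o[j] * h[j] * (1 - h[j]) for j in range(N_HID)]
        for j in range(N_HID):
            w_o[j] -= lr * d_out * h[j]
            b_h[j] -= lr * d_h[j]
            for i in range(N_IN):
                w_h[j][i] -= lr * d_h[j] * x[i]
        b_o -= lr * d_out

final_loss = total_loss()
print(f"squared error: {initial_loss:.3f} -> {final_loss:.3f}")
```

      Real "deep" nets simply stack more hidden layers and apply the same backward error-adjustment through each of them, which is why they need the large labeled training sets the note mentions.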

  25. Oct 2017
  26. Sep 2017
  27. Aug 2017
    1. So this transforms how we do design. The human engineer now says what the design should achieve, and the machine says, "Here's the possibilities." Now in her job, the engineer's job is to pick the one that best meets the goals of the design, which she knows as a human better than anyone else, using human judgment and expertise.

      A post on the Keras blog was talking about eventually using AI to generate computer programs to match certain specifications. Gruber is saying something very similar.

  28. Apr 2017
  29. Mar 2017
    1. Great overview and commentary. However, I would have liked more insight into the ethical ramifications and potential destructiveness of an ASI system as demonstrated in the movie.