485 Matching Annotations
  1. May 2019
    1. patents
    2. robot operating system downloads
    3. the GLUE metric
    4. robot installations
    5. AI conference attendance
    6. the speed at which computers can be trained to detect objects

      Technical Performance

    7. quality of question answering

      Technical Performance

    8. changes in AI performance

      Technical Performance

    9. Technical Performance
    10. number of undergraduates studying AI

      Volume of Activity

    11. growth in venture capital funding of AI startups

      Volume of Activity

    12. percent of female applicants for AI jobs

      Volume of Activity

    13. Volume of Activity
    14. increased participation in organizations like AI4ALL and Women in Machine Learning
    15. producers of AI patents
    16. ML teaching events
    17. University course enrollment
    18. 83 percent of 2017 AI papers
  2. Apr 2019
    1. Ashley Norris is the Chief Academic Officer at ProctorU, an organization that provides online exam proctoring for schools. This article offers an interesting overview of the negative side of technology advancements and what that has meant for students' ability to cheat. While the article does culminate as an ad, of sorts, for ProctorU, it is an interesting read and sparks thoughts on ProctorU's use of human monitors for testing as well as its integration of Artificial Intelligence into the process.

      Rating: 9/10.

  3. Mar 2019
    1. If you do not like the price you’re being offered when you shop, do not take it personally: many of the prices we see online are being set by algorithms that respond to demand and may also try to guess your personal willingness to pay. What’s next? A logical next step is that computers will start conspiring against us. That may sound paranoid, but a new study by four economists at the University of Bologna shows how this can happen.
    1. Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning.

      Again, this doesn't conflict with a machine-learning or deep-learning or neural-net way of seeing IP.

    2. No ‘copy’ of the story is ever made

      Or, the copy initially made is changed over time since human "memory" is interdependent and interactive with other brain changes, whereas each bit in computer memory is independent of all other bits.

      However, machine learning probably results in interactions between bits as the learning algorithm is exposed to more training data. The values in a deep neural network interact in ways that are not so obvious. So this machine-human analogy might be getting new life with machine learning.

    3. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight

      I don't see how this is true. The IP perspective depends on algorithms. There are many different algorithms to perform various tasks. Some perform inverse-kinematic calculations, but others conduct simpler, repeated steps. In computer science, this might be dynamic programming, recursive algorithms, or optimization. It seems that the IP metaphor still fits: it's just that those using the metaphor may not have updated their model of IP to be more modern.

    1. It is no wonder that AI is gaining popularity. Many facts and advantages are driving such profitable growth of AI, and the essential peculiarities are fully presented in the given article.

    1. we provide him as much help as possible in making a plan of action. Then we give him as much help as we can in carrying it out. But we also have to allow him to change his mind at almost any point, and to want to modify his plans.

      I'm thinking about the role of AI tutors/advisors here. How often do they operate in the kind of flexible way described here? I wonder if they can without actual human intervention.

  4. Feb 2019
    1. Nearly half of FBI rap sheets failed to include information on the outcome of a case after an arrest—for example, whether a charge was dismissed or otherwise disposed of without a conviction, or if a record was expunged

      This explains my personal experience here: https://hyp.is/EIfMfivUEem7SFcAiWxUpA/epic.org/privacy/global_entry/default.html (Why someone who had Global Entry was flagged for a police incident before he applied for Global Entry).

    2. Applicants also agree to have their fingerprints entered into DHS’ Automatic Biometric Identification System (IDENT) “for recurrent immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

      "Intelligence checks" is very concerning here: it suggests pretty much what has already been leaked, that the US is running complex autonomous screening of all of this data all the time. This also opens up the possibility of discriminatory algorithms, since most of these checks are probably rooted in machine-learning techniques and the criminal justice system in the US today tends to be fairly biased against certain groups of people to begin with.

    3. It cited research, including some authored by the FBI, indicating that “some of the biometrics at the core of NGI, like facial recognition, may misidentify African Americans, young people, and women at higher rates than whites, older people, and men, respectively.

      This re-affirms the previous annotation that the set of training data for the intelligence checks the US runs on global entry data is biased towards certain groups of people.

  5. Jan 2019
    1. AI Robots will be replacing the White Collar Jobs by 6% until 2021

      AI software and chatbots will be integrated into current technologies and automated with robotic systems. They will be given rights to access calendars, email accounts, browsing history, playlists, past purchases, and media viewing history. Six percent is a huge number worldwide, and people will be seen struggling to find jobs. But there are benefits too, as your work will get done more easily and speedily.

    1. CTP synthesizes critical reflection with technology production as a way of highlighting and altering unconsciously-held assumptions that are hindering progress in a technical field.

      Definition of critical technical practice.

      This approach is grounded in AI rather than HCI

      (verbatim from the paper) "CTP consists of the following moves:

      • identifying the core metaphors of the field

      • noticing what, when working with those metaphors, remains marginalized

      • inverting the dominant metaphors to bring that margin to the center

      • embodying the alternative as a new technology"

  6. Dec 2018
    1. Our understanding of the gap is driven by technological exploration through artifact creation and deployment, but HCI and CSCW systems need to have at their core a fundamental understanding of how people really work and live in groups, organizations, communities, and other forms of collective life. Otherwise, we will produce unusable systems, badly mechanizing and distorting collaboration and other social activity.

      The risk of CSCW not driving toward a more scientific pursuit of social theory, understanding, and ethnomethodology and instead simply building "cool toys"

    2. The gap is also CSCW’s unique contribution. CSCW exists intellectually at the boundary and interaction of technology and social settings. Its unique intellectual importance is at the confluence of technology and the social, and its

      CSCW's potential to become a science of the artificial resides in the study of interactions between society and technology

    3. Nonetheless, several guiding questions are required based on the social–technical gap and its role in any CSCW science of the artificial:

      • When can a computational system successfully ignore the need for nuance and context?

      • When can a computational system augment human activity with computer technologies suitably to make up for the loss in nuance and context, as argued in the approximation section earlier?

      • Can these benefits be systematized so that we know when we are adding benefit rather than creating loss?

      • What types of future research will solve some of the gaps between technical capabilities and what people expect in their full range of social and collaborative activities?

      Questions to consider in moving CSCW toward a science of the artificial

    4. The final first-order approximation is the creation of technical architectures that do not invoke the social–technical gap; these architectures neither require action nor delegate it. Instead, these architectures provide supportive or augmentative facilities, such as advice, to users.

      Support infrastructures provide a different type of approximation to augment the user experience.

    5. Another approximation incorporates new computational mechanisms to substitute adequately for social mechanisms or to provide for new social issues (Hollan & Stornetta, 1992).

      Approximate a social need with a technical cue. Example: in Google Docs, the anonymous user icons on a page indicate presence but not identity.

    6. First-order approximations, to adopt a metaphor from fluid dynamics, are tractable solutions that partially solve specific problems with known trade-offs.

      Definition of first-order approximations.

      Ackerman argues that CSCW needs a set of approximations that drive the development of initial work-arounds for the socio-technical gaps.

      Essentially, how to satisfy some social requirements and then approximate the trade-offs. Doesn't consider the product a solution in full but something to iterate and improve

      This may have been new/radical thinking 20 years ago but seems to have been largely adopted by the CSCW community

    7. Similarly, an educational perspective would argue that programmers and users should understand the fundamental nature of the social requirements.

      Ackerman argues that CS education should include understanding how to design/build for social needs but also to appreciate the social impacts of technology.

    8. CSCW’s science, however, must centralize the necessary gap between what we would prefer to construct and what we can construct. To do this as a practical program of action requires several steps—palliatives to ameliorate the current social conditions, first-order approximations to explore the design space, and fundamental lines of inquiry to create the science. These steps should develop into a new science of the artificial. In any case, the steps are necessary to move forward intellectually within CSCW, given the nature of the social–technical gap.

      Ackerman sets up the steps necessary for CSCW to become a science of the artificial and to try to resolve the socio-technical gap:

      Palliatives to ameliorate social conditions

      Approximations to explore the design space

      Lines of scientific inquiry

    9. Ideological initiatives include those that prioritize the needs of the people using the systems.

      Approaches to address social conditions and "block troublesome impacts":

      Stakeholder analysis

      Participatory design

      Scandinavian approach to info system design requires trade union involvement

    10. Simon’s (1969/1981) book does not address the inevitable gaps between the desired outcome and the means of producing that outcome for any large-scale design process, but CSCW researchers see these gaps as unavoidable. The social–technical gap should not have been ignored by Simon. Yet, CSCW is exactly the type of science Simon envisioned, and CSCW could serve as a reconstruction and renewal of Simon’s viewpoint, suitably revised. As much as was AI, CSCW is inherently a science of the artificial,

      How Ackerman sees CSCW as a science of the artificial:

      "CSCW is at once an engineering discipline attempting to construct suitable systems for groups, organizations, and other collectivities, and at the same time, CSCW is a social science attempting to understand the basis for that construction in the social world (or everyday experience)."

    11. At a simple level, CSCW’s intellectual context is framed by social constructionism and ethnomethodology (e.g., Berger & Luckmann, 1966; Garfinkel, 1967), systems theories (e.g., Hutchins, 1995a), and many large-scale system experiences (e.g., American urban renewal, nuclear power, and Vietnam). All of these pointed to the complexities underlying any social activity, even those felt to be straightforward.

      Succinct description of CSCW as social constructionism, ethnomethodology, systems theory, and large-scale system implementation.

    12. Yet, The Sciences of the Artificial became an anthem call for artificial intelligence and computer science. In the book he argued for a path between the idea for a new science (such as economics or artificial intelligence) and the construction of that new science (perhaps with some backtracking in the creation process). This argument was both characteristically logical and psychologically appealing for the time.

      Simon defines "Sciences of the Artificial" as new sciences/disciplines that synthesize knowledge that is technically or socially constructed or "created and maintained through human design and agency" as opposed to the natural sciences

    13. The HCI and CSCW research communities need to ask what one might do to ameliorate the effects of the gap and to further understand the gap. I believe an answer—and a future HCI challenge—is to reconceptualize CSCW as a science of the artificial. This echoes Simon (1981) but properly updates his work for CSCW’s time and intellectual task.

      Ackerman describes "CSCW as a science of the artificial" as a potential approach to reduce the socio-technical gap

  7. Nov 2018
    1. The decisive thing is that they remain masters of the process - and develop a vision for the new machine age.

      It doesn't really look to me as if we were ever the "masters of the process." And that is what Marx is about, too. I think.

  8. Sep 2018
    1. And its very likely that IA is a much easier road to the achievement of superhumanity than pure AI. In humans, the hardest development problems have already been solved. Building up from within ourselves ought to be easier than figuring out what we really are and then building machines that are all of that.

      The authors of the text are proposing a radically different approach to the inevitable "singularity" event. They propose the research and development of IA, or Intelligence Amplification: developing computers in symbiosis with humans. They note that IA could be easier to develop than AI algorithms, since humanity would first have to probe what its true weaknesses and strengths are and, in turn, develop an IA system that covers humanity's weaknesses. This would keep such a system from getting ahead of us, which could potentially delay the point at which we reach the singularity.

  9. Jul 2018
    1. Leading thinkers in China argue that putting government in charge of technology has one big advantage: the state can distribute the fruits of AI, which would otherwise go to the owners of algorithms.
  10. Jun 2018
    1. In “Getting Real,” Barad proposes that “reality is sedimented out of the process of making the world intelligible through certain practices and not others ...” (1998: 105). If, as Barad and other feminist researchers suggest, we are responsible for what exists, what is the reality that current discourses and practices regarding new technologies make intelligible, and what is excluded? To answer this question Barad argues that we need a simultaneous account of the relations of humans and nonhumans and of their asymmetries and differences. This requires remembering that boundaries between humans and machines are not naturally given but constructed, in particular historical ways and with particular social and material consequences. As Barad points out, boundaries are necessary for the creation of meaning, and, for that very reason, are never innocent. Because the cuts implied in boundary making are always agentially positioned rather than naturally occurring, and because boundaries have real consequences, she argues, “accountability is mandatory” (187): We are responsible for the world in which we live not because it is an arbitrary construction of our choosing, but because it is sedimented out of particular practices that we have a role in shaping (1998: 102). The accountability involved is not, however, a matter of identifying authorship in any simple sense, but rather a problem of understanding the effects of particular assemblages, and assessing the distributions, for better and worse, that they engender.
    2. Finally, the ‘smart’ machine's presentation of itself as the always obliging, 'labor-saving device' erases any evidence of the labor involved in its operation "from bank personnel to software programmers to the third-world workers who so often make the chips" (75).
    3. Chasin poses the question (which I return to below) of how a change in our view of objects from passive and outside the social could help to undo the subject/object binary and all of its attendant orderings, including for example male/female, or mental/manual
    4. Figured as servants, she points out, technologies reinscribe the difference between ‘us’ and those who serve us, while eliding the difference between the latter and machines: "The servant troubles the distinction between we-human-subjects-inventors with a lot to do (on the one hand) and them-object-things that make it easier for us (on the other)" (1995: 73)
  11. Apr 2018
    1. The alternative, of a regulatory patchwork, would make it harder for the West to amass a shared stock of AI training data to rival China’s.

      Fascinating geopolitical suggestion here: Trans-Atlantic GDPR-like rules as the NATO of data privacy to effectively allow "the West" to compete against the People's Republic of China in the development of artificial intelligence.

  12. Mar 2018
    1. The concentration of skills in certain countries and global companies could lead to a situation where other (native) companies are crowded out.

      The risk of a lack of development of informational capabilities, not at the minimum but at the maximum end: this is not about people who do not know how to use a computer and must learn, but about those who develop artificial intelligence and lack the capacity to reach the very top of that scale.

    2. Artificial intelligence (AI), machine learning and deep learning

      A graphic explanation of artificial intelligence, machine learning, and deep learning.

    3. we will use an inclusive definition of intelligence as ‘problem-solving’ and consider ‘an intelligent system’ to be one which takes the best possible action in a given situation.

      Intelligence:

      • The capacity to solve problems.
      • The capacity to choose the best option in a given situation.
  13. Feb 2018
    1. A recent article in the Scientific American asked whether democracy will survive big data and artificial intelligence

      Certainly, topics related to data, artificial intelligence, and algorithms are of great interest at present, given the use of these technologies. The Web Foundation has documents on the subject.

  14. Jan 2018
    1. faced with the naturalized destructiveness that has accompanied the Anthropocene, and with the appearance of the artificial as the ineluctable mode of human life, we need to counter with the cultivation of qualitatively new modes of becoming through the futurizing potential offered by the artificial. In this case ‘possibility’ means “negotiation with reality and not an escalation of what is”
  15. Dec 2017
    1. Most of the recent advances in AI depend on deep learning, which is the use of backpropagation to train neural nets with multiple layers ("deep" neural nets).

      Neural nets consist of layers of nodes, with edges from each node to the nodes in the next layer. The first and last layers are input and output. The output layer might only have two nodes, representing true or false. Each node holds a value representing how excited it is. Each edge has a value representing strength of connection, which determines how much of the excitement passes through.

      The edges in an untrained neural net start with random values. The training data consists of a series of samples that are already labeled. If the output is wrong, the edges are adjusted according to how much they contributed to the error. It's called backpropagation because it starts with the output nodes and works toward the input nodes.

      Deep neural nets can be effective, but only for single specific tasks. And they need huge sets of training data. They can also be tricked rather easily. Worse, someone who has access to the net can discover ways of adding noise to images that will make the net "see" things that obviously aren't there.
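
      The training loop described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular library's implementation: a tiny 2-2-1 sigmoid network learning XOR, with edge weights initialized randomly and adjusted by backpropagation from the output node back toward the inputs. All names and hyperparameters here are illustrative choices.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical tiny net: 2 inputs -> 2 hidden nodes -> 1 output node.
# Each hidden-weight row is [w_input0, w_input1, bias]; edges start random.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]  # [w_h0, w_h1, bias]

# Labeled training data: the XOR function.
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    o = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, o

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in samples)

initial_loss = total_loss()
lr = 0.5
for _ in range(5000):
    for x, t in samples:
        h, o = forward(x)
        # Backpropagation: start at the output node, work toward the inputs.
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
        # Adjust each edge according to how much it contributed to the error.
        for i in range(2):
            w_out[i] -= lr * d_o * h[i]
        w_out[2] -= lr * d_o
        for i in range(2):
            for j in range(2):
                w_hidden[i][j] -= lr * d_h[i] * x[j]
            w_hidden[i][2] -= lr * d_h[i]

print("loss before:", round(initial_loss, 4), "after:", round(total_loss(), 4))
```

      Even this toy net illustrates the points above: it needs many passes over labeled samples, and what it learns applies only to this single task.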

  16. Aug 2017
    1. So this transforms how we do design. The human engineer now says what the design should achieve, and the machine says, "Here's the possibilities." Now in her job, the engineer's job is to pick the one that best meets the goals of the design, which she knows as a human better than anyone else, using human judgment and expertise.

      A post on the Keras blog was talking about eventually using AI to generate computer programs to match certain specifications. Gruber is saying something very similar.

  17. Jun 2017
  18. Apr 2017
  19. Mar 2017
    1. Great overview and commentary. However, I would have liked some more insight into the ethical ramifications and potential destructiveness of an ASI system as demonstrated in the movie.

  20. Feb 2017
  21. Jan 2017
    1. According to a 2015 report by Incapsula, 48.5% of all web traffic are by bots.

      ...

      The majority of bots are "bad bots" - scrapers that are harvesting emails and looking for content to steal, DDoS bots, hacking tools that are scanning websites for security vulnerabilities, spammers trying to sell the latest diet pill, ad bots that are clicking on your advertisements, etc.

      ...

      Content on websites such as dev.to are reposted elsewhere, word-for-word, by scrapers programmed by Black Hat SEO specialists.

      ...

      However, a new breed of scrapers exist - intelligent scrapers. They can search websites for sentences containing certain keywords, and then rewrite those sentences using "article spinning" techniques.

  22. Dec 2016
    1. The team on Google Translate has developed a neural network that can translate language pairs for which it has not been directly trained. "For example, if the neural network has been taught to translate between English and Japanese, and English and Korean, it can also translate between Japanese and Korean without first going through English."

  23. Sep 2016
  24. Jun 2016
  25. May 2016
  26. Apr 2016
    1. We should have control of the algorithms and data that guide our experiences online, and increasingly offline. Under our guidance, they can be powerful personal assistants.

      Big business has been very militant about protecting their "intellectual property". Yet they regard every detail of our personal lives as theirs to collect and sell at whim. What a bunch of little darlings they are.

  27. Jan 2016
  28. Dec 2015
    1. OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
    1. Big Sur is our newest Open Rack-compatible hardware designed for AI computing at a large scale. In collaboration with partners, we've built Big Sur to incorporate eight high-performance GPUs
  29. Nov 2015
    1. TPOT is a Python tool that automatically creates and optimizes machine learning pipelines using genetic programming. Think of TPOT as your “Data Science Assistant”: TPOT will automate the most tedious part of machine learning by intelligently exploring thousands of possible pipelines, then recommending the pipelines that work best for your data.

      https://github.com/rhiever/tpot TPOT (Tree-based Pipeline Optimization Tool) Built on numpy, scipy, pandas, scikit-learn, and deap.
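
      The pipeline search that TPOT automates with genetic programming can be illustrated, in miniature, with a plain grid search over a scikit-learn pipeline. This is a sketch of the idea only, not TPOT's API; the dataset, pipeline steps, and parameter grid below are arbitrary choices for demonstration.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One candidate pipeline shape: scale -> reduce dimensions -> classify.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Exhaustively score a small space of pipeline configurations;
# TPOT instead *evolves* such configurations with genetic programming.
grid = {"pca__n_components": [2, 3, 4], "clf__C": [0.1, 1.0, 10.0]}
search = GridSearchCV(pipe, grid, cv=3)
search.fit(X_train, y_train)

print(search.best_params_)
print("held-out accuracy:", search.score(X_test, y_test))
```

      TPOT goes further than this sketch: it also searches over which steps the pipeline contains, not just their parameters, and exports the winning pipeline as Python code.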

  30. Jul 2015
  31. May 2015
    1. In this work, Lee and Brunskill fit a separate Knowledge Tracing model to each student’s data. This involved fitting four parameters: initial probability of mastery, probability of transitioning from unmastered to mastered, probability of giving an incorrect answer if the student has mastered the skill, and probability of giving a correct answer if the student has not mastered the skill. Each student’s model is fit using a combination of Expectation Maximization (EM) combined with a brute force search

      First comment
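
      The four-parameter model described above can be sketched as a single Knowledge Tracing update step. The parameter values below are illustrative only, not the ones Lee and Brunskill fit (they estimated them per student with EM plus brute-force search).

```python
# Illustrative parameter values; Lee and Brunskill fit these per student.
p_transit = 0.1  # P(T): transition from unmastered to mastered
p_slip = 0.1     # P(S): incorrect answer despite mastery
p_guess = 0.25   # P(G): correct answer without mastery

def update_mastery(p_mastery, correct):
    """One Knowledge Tracing step: apply Bayes' rule to the observed
    answer, then account for a chance of learning on this opportunity."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    return posterior + (1 - posterior) * p_transit

# Starting from the initial probability of mastery, trace a few answers.
p = 0.2  # P(L0): initial probability of mastery
for answer in [True, True, False, True]:
    p = update_mastery(p, answer)
    print(round(p, 3))
```

      Correct answers push the mastery estimate up and incorrect answers push it down, modulated by the slip and guess probabilities.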

  32. Nov 2014
    1. The Most Terrifying Thought Experiment of All Time

      TL;DR: A thought experiment in which, merely by knowing about it, you are contributing to humanity's enslavement by an all-powerful AI.

  33. Feb 2014
    1. Point 3 is almost certainly the one that still bugs Doug. All sorts of mechanisms and utilities are around and used (source code control, registries, WWW search engines, and on and on), but the problem of indexing and finding relevant information is tougher today than ever before, even on one's own hard disk, let alone the WWW.

      I would agree that "the problem of indexing and finding relevant information is tougher today than ever before" ... and especially "on one's own hard disk".

      Vannevar Bush recognized the problem of artificial systems of indexing long before McIlroy pulled this page from his typewriter in 1964, and here we are 50 years later using the same kind of filesystem indexing systems and wondering why it's harder than ever to find information on our own hard drives.

    1. The real heart of the matter of selection, however, goes deeper than a lag in the adoption of mechanisms by libraries, or a lack of development of devices for their use. Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing. When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and re-enter on a new path. The human mind does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain. It has other characteristics, of course; trails that are not frequently followed are prone to fade, items are not fully permanent, memory is transitory. Yet the speed of action, the intricacy of trails, the detail of mental pictures, is awe-inspiring beyond all else in nature.

      With the advent of Google Docs we're finally moving away from the archaic indexing mentioned here. The filesystem metaphor was simple and dominated how everyone managed their data, which extended into how we developed web content as well.

      The declaration that Hierarchical File Systems are Dead has led to better systems of tagging and search, but we're still far from where we need to be since there is still a heavy focus on the document as a whole instead of also the content within the document.

      The linearity of printed books is even more treacherously entrenched in our minds than the classification systems used by libraries to store those books.

      One day maybe we'll liberate every piece of content from every layer of its concentric cages: artificial systems of indexing, books, web pages, paragraphs, even sentences and words themselves. Only then will we be able to re-dress those thoughts automatically into those familiar and comforting forms that keep our thoughts caged.