44 Matching Annotations
  1. Last 7 days
    1. Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning.

      Again, this doesn't conflict with a machine-learning or deep-learning or neural-net way of seeing IP.

    2. No ‘copy’ of the story is ever made

      Or, the copy initially made is changed over time since human "memory" is interdependent and interactive with other brain changes, whereas each bit in computer memory is independent of all other bits.

      However, machine learning probably results in interactions between bits as the learning algorithm is exposed to more training data. The values in a deep neural network interact in ways that are not so obvious. So this machine-human analogy might be getting new life with machine learning.

    3. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight

      I don't see how this is true. The IP perspective depends on algorithms, and there are many different algorithms for performing a given task. Some perform inverse-kinematic calculations, but others just repeat simpler steps; in computer science this might be dynamic programming, recursion, or optimization. The IP metaphor still seems to fit: those using it may simply not have updated their model of IP to anything more modern.
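
      As a concrete illustration (my own, not from the article), the sketch below contrasts the two styles: one function estimates the ball's initial conditions and predicts the landing point, while the other just repeats a simple feedback step, in the spirit of the constant-visual-angle heuristic fielders are often said to use. All function names and numbers are made-up placeholders.

      ```python
      # Illustrative only: two algorithmic styles for "catching a fly ball".
      # Neither is claimed to be what the brain actually does; the point is that
      # the IP metaphor also covers simple repeated-step strategies.

      def catch_by_simulation(x0, vx, vy, g=9.81):
          """Estimate initial conditions, solve for the landing point, run there."""
          t_flight = 2 * vy / g            # time until the ball falls back to launch height
          return x0 + vx * t_flight        # predicted landing spot, computed once

      def catch_by_feedback(ball_angles, gain=0.5):
          """Repeat one simple step: keep the ball's visual angle from drifting."""
          speed, speeds = 0.0, []
          for prev, angle in zip(ball_angles, ball_angles[1:]):
              speed += gain * (angle - prev)   # angle rising -> back up; falling -> move in
              speeds.append(speed)
          return speeds                        # no trajectory is ever estimated

      print(catch_by_simulation(x0=0.0, vx=12.0, vy=15.0))            # one-shot prediction
      print(catch_by_feedback([0.50, 0.55, 0.58, 0.60, 0.60, 0.59]))  # step-by-step control
      ```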

  2. Mar 2019
    1. It is no wonder that AI is gaining popularity. Many factors and advantages are driving such profitable growth of AI. The essential peculiarities are fully presented in the given article.

    1. Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science.
    1. we provide him as much help as possible in making a plan of action. Then we give him as much help as we can in carrying it out. But we also have to allow him to change his mind at almost any point, and to want to modify his plans.

      I'm thinking about the role of AI tutors/advisors here. How often do they operate in the kind of flexible way described here? I wonder if they can without actual human intervention.

  3. Feb 2019
    1. Nearly half of FBI rap sheets failed to include information on the outcome of a case after an arrest—for example, whether a charge was dismissed or otherwise disposed of without a conviction, or if a record was expunged

      This explains my personal experience here: https://hyp.is/EIfMfivUEem7SFcAiWxUpA/epic.org/privacy/global_entry/default.html (Why someone who had Global Entry was flagged for a police incident before he applied for Global Entry).

    2. Applicants also agree to have their fingerprints entered into DHS’ Automatic Biometric Identification System (IDENT) “for recurrent immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

      "Intelligence checks" is very concerning here, as it suggests pretty much what has already been leaked: that the US is running complex autonomous screening of all of this data all the time. This also opens up the possibility of discriminatory algorithms, since most of these are probably rooted in machine learning techniques, and the criminal justice system in the US today tends to be biased against certain groups of people to begin with.

    3. It cited research, including some authored by the FBI, indicating that “some of the biometrics at the core of NGI, like facial recognition, may misidentify African Americans, young people, and women at higher rates than whites, older people, and men, respectively.

      This reaffirms the previous annotation: the training data behind the intelligence checks the US runs on Global Entry data is biased against certain groups of people.

  4. Jan 2019
    1. AI Robots will be replacing the White Collar Jobs by 6% until 2021

      AI software and chatbots will be incorporated into current technologies and automated together with robotic systems. They will be given rights to access calendars, email accounts, browsing history, playlists, past purchases, and media viewing history. Six percent is a huge number worldwide, and people will be seen struggling to find jobs. But there are benefits too, since your work will get done easily and speedily.

    1. CTP synthesizes critical reflection with technology production as a way of highlighting and altering unconsciously-held assumptions that are hindering progress in a technical field.

      Definition of critical technical practice.

      This approach is grounded in AI rather than HCI.

      (verbatim from the paper) "CTP consists of the following moves:

      • identifying the core metaphors of the field

      • noticing what, when working with those metaphors, remains marginalized

      • inverting the dominant metaphors to bring that margin to the center

      • embodying the alternative as a new technology"

  5. Nov 2018
    1. What is decisive is that they remain masters of the process - and develop a vision for the new machine age.

      It doesn't really look to me as if we were ever the "masters of the process" in the first place. And that is also what Marx is about. I think.

  6. Sep 2018
    1. And its very likely that IA is a much easier road to the achievement of superhumanity than pure AI. In humans, the hardest development problems have already been solved. Building up from within ourselves ought to be easier than figuring out what we really are and then building machines that are all of that.

      The authors of the text propose a radically different approach to the supposedly inevitable "singularity" event: researching and developing IA, or Intelligence Amplification, which builds computers that work in symbiosis with humans. They note that IA could be easier to develop than pure AI algorithms, since it only requires humanity to probe its own true weaknesses and strengths and then build an IA system that covers those weaknesses. Keeping humans in the loop this way could also keep the system from getting ahead of us, which could potentially delay the point at which we reach the singularity.

  7. Jul 2018
    1. Leading thinkers in China argue that putting government in charge of technology has one big advantage: the state can distribute the fruits of AI, which would otherwise go to the owners of algorithms.
  8. Jun 2018
    1. In “Getting Real,” Barad proposes that “reality is sedimented out of the process of making the world intelligible through certain practices and not others ...” (1998: 105). If, as Barad and other feminist researchers suggest, we are responsible for what exists, what is the reality that current discourses and practices regarding new technologies make intelligible, and what is excluded? To answer this question Barad argues that we need a simultaneous account of the relations of humans and nonhumans and of their asymmetries and differences. This requires remembering that boundaries between humans and machines are not naturally given but constructed, in particular historical ways and with particular social and material consequences. As Barad points out, boundaries are necessary for the creation of meaning, and, for that very reason, are never innocent. Because the cuts implied in boundary making are always agentially positioned rather than naturally occurring, and because boundaries have real consequences, she argues, “accountability is mandatory” (187): “We are responsible for the world in which we live not because it is an arbitrary construction of our choosing, but because it is sedimented out of particular practices that we have a role in shaping” (1998: 102). The accountability involved is not, however, a matter of identifying authorship in any simple sense, but rather a problem of understanding the effects of particular assemblages, and assessing the distributions, for better and worse, that they engender.
    2. Finally, the ‘smart’ machine's presentation of itself as the always obliging, 'labor-saving device' erases any evidence of the labor involved in its operation "from bank personnel to software programmers to the third-world workers who so often make the chips" (75).
    3. Chasin poses the question (which I return to below) of how a change in our view of objects from passive and outside the social could help to undo the subject/object binary and all of its attendant orderings, including for example male/female, or mental/manual
    4. Figured as servants, she points out, technologies reinscribe the difference between ‘us’ and those who serve us, while eliding the difference between the latter and machines: "The servant troubles the distinction between we-human-subjects-inventors with a lot to do (on the one hand) and them-object-things that make it easier for us (on the other)" (1995: 73)
  9. Apr 2018
    1. The alternative, of a regulatory patchwork, would make it harder for the West to amass a shared stock of AI training data to rival China’s.

      Fascinating geopolitical suggestion here: Trans-Atlantic GDPR-like rules as the NATO of data privacy to effectively allow "the West" to compete against the People's Republic of China in the development of artificial intelligence.

  10. Dec 2017
    1. Most of the recent advances in AI depend on deep learning, which is the use of backpropagation to train neural nets with multiple layers ("deep" neural nets).

      Neural nets consist of layers of nodes, with edges from each node to the nodes in the next layer. The first and last layers are input and output. The output layer might only have two nodes, representing true or false. Each node holds a value representing how excited it is. Each edge has a value representing strength of connection, which determines how much of the excitement passes through.

      The edges in an untrained neural net start with random values. The training data consists of a series of samples that are already labeled. If the output is wrong, the edges are adjusted according to how much they contributed to the error. It's called backpropagation because it starts with the output nodes and works toward the input nodes.

      Deep neural nets can be effective, but only for single specific tasks. And they need huge sets of training data. They can also be tricked rather easily. Worse, someone who has access to the net can discover ways of adding noise to images that will make the net "see" things that obviously aren't there.
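
      As a minimal sketch of the mechanics described above (not taken from the annotated article), here is a tiny one-hidden-layer net trained by backpropagation on the XOR toy problem; the layer sizes, learning rate, and iteration count are arbitrary choices.

      ```python
      import numpy as np

      # Toy labeled training set: XOR. The inputs feed the first layer; the label is the target output.
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([[0], [1], [1], [0]], dtype=float)

      rng = np.random.default_rng(0)
      W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # edges start with random values
      W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      for step in range(10000):
          # Forward pass: each node's "excitement" flows along weighted edges.
          h = sigmoid(X @ W1 + b1)      # hidden layer
          out = sigmoid(h @ W2 + b2)    # output layer

          # Backward pass: start at the output error and work back toward the inputs.
          d_out = (out - y) * out * (1 - out)
          d_h = (d_out @ W2.T) * h * (1 - h)

          # Adjust each edge in proportion to how much it contributed to the error.
          W2 -= 0.5 * h.T @ d_out
          b2 -= 0.5 * d_out.sum(axis=0)
          W1 -= 0.5 * X.T @ d_h
          b1 -= 0.5 * d_h.sum(axis=0)

      print(out.round(2).ravel())       # should end up close to [0, 1, 1, 0]
      ```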

  11. Aug 2017
    1. So this transforms how we do design. The human engineer now says what the design should achieve, and the machine says, "Here's the possibilities." Now in her job, the engineer's job is to pick the one that best meets the goals of the design, which she knows as a human better than anyone else, using human judgment and expertise.

      A post on the Keras blog was talking about eventually using AI to generate computer programs to match certain specifications. Gruber is saying something very similar.
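
      As a rough sketch of that division of labor (my own illustration, not Gruber's system or the Keras post), the machine below enumerates candidate designs that satisfy a stated constraint and the human engineer judges the shortlist; the beam cross-section problem, the constraint formula, and all numbers are stand-ins.

      ```python
      import random

      random.seed(42)

      def generate_candidates(n):
          """Machine's side: propose many candidate designs (here, beam width/depth pairs)."""
          return [(random.uniform(0.1, 1.0), random.uniform(0.1, 1.0)) for _ in range(n)]

      def meets_spec(width, depth, min_stiffness=0.02):
          """The engineer's stated goal, expressed as a constraint (rectangular section: I = w*d^3/12)."""
          return width * depth ** 3 / 12 >= min_stiffness

      def shortlist(candidates, k=5):
          """Keep the feasible designs, smallest cross-section (lightest) first."""
          feasible = [c for c in candidates if meets_spec(*c)]
          return sorted(feasible, key=lambda c: c[0] * c[1])[:k]

      # "Here's the possibilities" -- the human now applies judgment to pick one.
      for width, depth in shortlist(generate_candidates(1000)):
          print(f"width={width:.2f}, depth={depth:.2f}")
      ```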

  12. Apr 2017
  13. Mar 2017
    1. Great overview and commentary. However, I would have liked some more insight into the ethical ramifications and potential destructiveness of an ASI system as demonstrated in the movie.

  14. Feb 2017
  15. Jan 2017
    1. According to a 2015 report by Incapsula, 48.5% of all web traffic are by bots.

      ...

      The majority of bots are "bad bots" - scrapers that are harvesting emails and looking for content to steal, DDoS bots, hacking tools that are scanning websites for security vulnerabilities, spammers trying to sell the latest diet pill, ad bots that are clicking on your advertisements, etc.

      ...

      Content on websites such as dev.to are reposted elsewhere, word-for-word, by scrapers programmed by Black Hat SEO specialists.

      ...

      However, a new breed of scrapers exist - intelligent scrapers. They can search websites for sentences containing certain keywords, and then rewrite those sentences using "article spinning" techniques.

  16. Dec 2016
    1. The team on Google Translate has developed a neural network that can translate language pairs for which it has not been directly trained. "For example, if the neural network has been taught to translate between English and Japanese, and English and Korean, it can also translate between Japanese and Korean without first going through English."

  17. Sep 2016
  18. Jun 2016
  19. May 2016
  20. Apr 2016
    1. We should have control of the algorithms and data that guide our experiences online, and increasingly offline. Under our guidance, they can be powerful personal assistants.

      Big business has been very militant about protecting their "intellectual property". Yet they regard every detail of our personal lives as theirs to collect and sell at whim. What a bunch of little darlings they are.

  21. Jan 2016
  22. Dec 2015
    1. OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
    1. Big Sur is our newest Open Rack-compatible hardware designed for AI computing at a large scale. In collaboration with partners, we've built Big Sur to incorporate eight high-performance GPUs
  23. Nov 2015
    1. TPOT is a Python tool that automatically creates and optimizes machine learning pipelines using genetic programming. Think of TPOT as your “Data Science Assistant”: TPOT will automate the most tedious part of machine learning by intelligently exploring thousands of possible pipelines, then recommending the pipelines that work best for your data.

      https://github.com/rhiever/tpot - TPOT (Tree-based Pipeline Optimization Tool), built on numpy, scipy, pandas, scikit-learn, and deap.
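
      For context, typical TPOT usage looks roughly like the sketch below; the dataset and the generations/population settings are arbitrary examples, so check the repository above for the current API.

      ```python
      # Rough sketch of typical TPOT usage; the settings here are arbitrary examples.
      from tpot import TPOTClassifier
      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split

      X_train, X_test, y_train, y_test = train_test_split(
          *load_digits(return_X_y=True), test_size=0.25, random_state=42)

      # Genetic programming explores thousands of candidate pipelines.
      tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2, random_state=42)
      tpot.fit(X_train, y_train)
      print(tpot.score(X_test, y_test))

      # Export the best pipeline it found as a standalone scikit-learn script.
      tpot.export('best_pipeline.py')
      ```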

  24. Jul 2015
  25. May 2015
    1. In this work, Lee and Brunskill fit a separate Knowledge Tracing model to each student’s data. This involved fitting four parameters: initial probability of mastery, probability of transitioning from unmastered to mastered, probability of giving an incorrect answer if the student has mastered the skill, and probability of giving a correct answer if the student has not mastered the skill. Each student’s model is fit using a combination of Expectation Maximization (EM) combined with a brute force search

      First comment
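
      For context (not from Lee and Brunskill's paper), below is a minimal sketch of the standard Knowledge Tracing update that those four fitted parameters drive: after each observed answer, the probability of mastery is revised by Bayes' rule and then the learning transition is applied. The parameter values in the example are placeholders, not fitted values.

      ```python
      def knowledge_tracing_update(p_mastery, correct, p_transit, p_slip, p_guess):
          """One Bayesian Knowledge Tracing step for a single skill.

          p_mastery : current P(student has mastered the skill)
          correct   : whether the observed answer was right
          p_transit : P(unmastered -> mastered) on a practice opportunity
          p_slip    : P(incorrect answer | mastered)
          p_guess   : P(correct answer | not mastered)
          """
          if correct:
              evidence = p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess
              posterior = p_mastery * (1 - p_slip) / evidence
          else:
              evidence = p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess)
              posterior = p_mastery * p_slip / evidence
          # Learning may also occur on this practice opportunity.
          return posterior + (1 - posterior) * p_transit

      # Placeholder parameters (the paper fits these per student with EM plus brute force).
      p = 0.2   # initial probability of mastery
      for answer in [False, True, True, True]:
          p = knowledge_tracing_update(p, answer, p_transit=0.15, p_slip=0.1, p_guess=0.25)
          print(round(p, 3))
      ```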

  26. Nov 2014
    1. The Most Terrifying Thought Experiment of All Time

      TLDR: A thought experiment in which, merely by knowing about it, you are contributing to humanity's enslavement by an all-powerful AI.