180 Matching Annotations
  1. Last 7 days
    1. Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold, it's impossible, but, as power increases, eventually it could become easy.[79] With regard to computer vision, Moravec estimated that simply matching the edge and motion detection capabilities of the human retina in real time would require a general-purpose computer capable of 10⁹ operations/second (1000 MIPS).[80] As of 2011, practical computer vision applications require 10,000 to 1,000,000 MIPS. By comparison, the fastest supercomputer in 1976, the Cray-1 (retailing at $5 million to $8 million), was only capable of around 80 to 130 MIPS, and a typical desktop computer at the time achieved less than 1 MIPS.
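
      The gap can be made concrete with back-of-the-envelope arithmetic using only the MIPS figures quoted in the passage (a sketch; the figures themselves are rough estimates):

```python
# Rough comparison of Moravec's retina estimate (10**9 ops/sec = 1000 MIPS)
# against 1976-era hardware, using the figures quoted in the passage.
retina_mips = 1_000       # Moravec's estimate for real-time retina matching
cray1_mips = 130          # upper bound quoted for the Cray-1 supercomputer
desktop_mips = 1          # generous bound for a typical 1976 desktop

crays_needed = retina_mips / cray1_mips       # how many Cray-1s per retina
desktops_needed = retina_mips / desktop_mips  # how many desktops per retina

print(f"Cray-1s needed:  {crays_needed:.1f}")
print(f"Desktops needed: {desktops_needed:.0f}")
```

      At $5 million to $8 million per Cray-1, matching even one retina would have cost tens of millions of 1976 dollars, which is exactly Moravec's point about the power threshold.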
  2. Dec 2019
    1. This is not a new idea. It is based on the vision expounded by Vannevar Bush in his 1945 essay “As We May Think,” which conjured up a “memex” machine that would remember and connect information for us mere mortals. The concept was refined in the early 1960s by the Internet pioneer J. C. R. Licklider, who wrote a paper titled “Man-Computer Symbiosis,” and the computer designer Douglas Engelbart, who wrote “Augmenting Human Intellect.” They often found themselves in opposition to their colleagues, like Marvin Minsky and John McCarthy, who stressed the goal of pursuing artificial intelligence machines that left humans out of the loop.

      Seymour Papert had an approach that provides a nice synthesis between these two camps, by leveraging early childhood development to provide insights into the creation of AI.

    2. Thompson’s point is that “artificial intelligence” — defined as machines that can think on their own just like or better than humans — is not yet (and may never be) as powerful as “intelligence amplification,” the symbiotic smarts that occur when human cognition is augmented by a close interaction with computers.

      Intelligence amplification over artificial intelligence. In reality you can't get to AI until you've mastered IA.

    1. Four databases of citizen science and crowdsourcing projects — SciStarter, the Citizen Science Association (CSA), CitSci.org, and the Woodrow Wilson International Center for Scholars (the Wilson Center Commons Lab) — are working on a common project metadata schema to support data sharing, with the goal of maintaining accurate and up-to-date information about citizen science projects. The federal government is joining this conversation with a cross-agency effort to promote citizen science and crowdsourcing as tools to advance agency missions. Specifically, the White House Office of Science and Technology Policy (OSTP), in collaboration with the U.S. Federal Community of Practice for Citizen Science and Crowdsourcing (FCPCCS), is compiling an Open Innovation Toolkit containing resources for federal employees hoping to implement citizen science and crowdsourcing projects. Navigation through this toolkit will be facilitated in part through a system of metadata tags. In addition, the Open Innovation Toolkit will link to the Wilson Center’s database of federal citizen science and crowdsourcing projects. These groups became aware of their complementary efforts and the shared challenge of developing project metadata tags, which gave rise to the need for a workshop.

      Sense Collective's Climate Tagger API and Pool Party Semantic Web plug-in are perfectly suited to support the Wilson Center's metadata schema project. Creating a common metadata schema that is used across multiple organizations working within the same domain, with similar (and overlapping) data and data types, is an essential step towards realizing collective intelligence. Significant redundancy consumes limited resources, as organizations often perform the same type of data structuring. Interoperability issues between organizations, their metadata semantics, and their serialization methods prevent cumulative progress as a community. Sense Collective's MetaGrant program is working to provide a shared infrastructure for NGOs, social impact investment funds, and social impact bond programs, to help rapidly improve on the problems being addressed by this excellent Wilson Center project. Now let's extend the coordinated metadata semantics to 1,000 more organizations and incentivize the citizen science volunteers who make this possible, with a closer connection to the local benefits they produce through their efforts. With integration into social impact bond programs and public/private partnerships, we can incentivize collective action in ways that match the scope and scale of the problems we face.

  3. Nov 2019
    1. Tech Literacy Resources

      This website is the "Resources" archive for the IgniteED Labs at Arizona State University's Mary Lou Fulton Teachers College. The IgniteED Labs allow students, staff, and faculty to explore innovative and emerging learning technology such as virtual reality (VR), artificial intelligence (AI), 3-D printing, and robotics. The left side of this site provides several resources on understanding and effectively using the various technologies available in the IgniteED labs. Each resource directs you to external websites, such as product tutorials on YouTube, setup guides, and the products' websites. The right column, "Tech Literacy Resources," contains a variety of guides on how students can effectively and strategically use different technologies. Resources include "how-to" user guides, online academic integrity policies, and technology support services. Rating: 9/10

    1. However, PIPA is the agency's first standalone bot, meaning it can be used across multiple government agencies. Crucially, the bot can be embedded within web and mobile apps, as well as within third-party personal assistants, such as Google Home and Alexa.  According to Keenan, the gang of five digital assistants released so far by the DHS have answered "more than 2.3 million questions, reducing the need for people to have to pick up a phone or come into a service centre for help.” “This is what our digital transformation program is all about – making life simpler and easier for all Australians.”

      Scope of PIPA

    1. Human Services has a number of public-facing chatbots already. The newest of them is ‘Charles’, launched last year, which offers support for the government’s MyGov service. Others include ‘Sam’ and ‘Oliver’, both of which launched in 2017. The department’s customer-facing digital assistants have so far answered more than 2.3 million questions. Human Services also uses a number of staff-facing chatbots. In November Keenan revealed that the department had launched an Augmented Intelligence Centre of Excellence, which the minister said would boost collaboration with industry, academia and other government entities.

      Chatbots that exist

    1. The federal government has decided that all Commonwealth entities would benefit from having a chatbot, with the Department of Human Services (DHS) announcing it was working on the development of one that will be ready by the end of 2019. The Platform Independent Personal Assistant -- PIPA -- is expected to "significantly improve the customer experience for users of online government services", according to Minister for Human Services and Digital Transformation Michael Keenan.

      Federal Government creating PIPA chatbot

    1. Before implementing Alex 2.5 years ago, IP Australia staffers were taking 12,000 calls per month."Now I'm not saying Alex was the only intervention we had, but it was one of the main ones. Acting on the insights we were getting from Alex, we're now down to 5,000 calls per month and still dropping," Stokes said. "The value for money and return on investment is quite good."

      IP Australia using chatbot named Alex to reduce calls received

    1. In 2001, AI founder Marvin Minsky asked "So the question is why didn't we get HAL in 2001?"[167] Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem.[168] For Ray Kurzweil, the issue is computer power and, using Moore's Law, he predicted that machines with human-level intelligence will appear by 2029.[169] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[170] There were many other explanations and for each there was a corresponding research program underway.
    2. Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.
    3. The neats: logic and symbolic reasoning. Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[100] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[101] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog.[102] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permit tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[103] Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[104] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems—not machines that think as people do.[105] The scruffies: frames and scripts. Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person.
In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."[106] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[107] In 1975, in a seminal paper, Minsky noted that many of his fellow "scruffy" researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English.[108] Many years later object-oriented programming would adopt the essential idea of "inheritance" from AI research on frames.
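
      Minsky's frames lend themselves to a tiny sketch: defaults that are assumed until overridden, the same idea of "inheritance" that object-oriented programming later adopted. This is a hypothetical illustration, not Minsky's actual formalism:

```python
# A toy "frame" system: each frame holds default assumptions, and a frame
# may inherit (and override) defaults from a parent frame.
# Hypothetical illustration only -- not Minsky's actual notation.

class Frame:
    def __init__(self, name, parent=None, **defaults):
        self.name, self.parent, self.defaults = name, parent, defaults

    def get(self, slot):
        # Look up the slot locally first, then walk the inheritance chain.
        if slot in self.defaults:
            return self.defaults[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

bird = Frame("bird", can_fly=True, eats="worms")
penguin = Frame("penguin", parent=bird, can_fly=False)  # override a default

print(bird.get("can_fly"))     # True
print(penguin.get("can_fly"))  # False: overridden, yet still "a bird"
print(penguin.get("eats"))     # worms: inherited default assumption
```

      The deductions are not "logical" in McCarthy's sense, since a default can be contradicted without inconsistency, which is exactly the scruffy point.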
    1. Bolt, Beranek and Newman (BBN) developed its own Lisp machine, named Jericho,[7] which ran a version of Interlisp. It was never marketed. Frustrated, the whole AI group resigned, and were hired mostly by Xerox. So, Xerox Palo Alto Research Center had, simultaneously with Greenblatt's own development at MIT, developed their own Lisp machines which were designed to run InterLisp (and later Common Lisp). The same hardware was used with different software also as Smalltalk machines and as the Xerox Star office system.
    2. In 1979, Russell Noftsker, being convinced that Lisp machines had a bright commercial future due to the strength of the Lisp language and the enabling factor of hardware acceleration, proposed to Greenblatt that they commercialize the technology.[citation needed] In a counter-intuitive move for an AI Lab hacker, Greenblatt acquiesced, hoping perhaps that he could recreate the informal and productive atmosphere of the Lab in a real business. These ideas and goals were considerably different from those of Noftsker. The two negotiated at length, but neither would compromise. As the proposed firm could succeed only with the full and undivided assistance of the AI Lab hackers as a group, Noftsker and Greenblatt decided that the fate of the enterprise was up to them, and so the choice should be left to the hackers. The ensuing discussions of the choice divided the lab into two factions. In February 1979, matters came to a head. The hackers sided with Noftsker, believing that a commercial venture fund-backed firm had a better chance of surviving and commercializing Lisp machines than Greenblatt's proposed self-sustaining start-up. Greenblatt lost the battle.
  4. Oct 2019
    1. We live in an age of paradox. Systems using artificial intelligence match or surpass human level performance in more and more domains, leveraging rapid advances in other technologies and driving soaring stock prices. Yet measured productivity growth has fallen in half over the past decade, and real income has stagnated since the late 1990s for a majority of Americans. Brynjolfsson, Rock, and Syverson describe four potential explanations for this clash of expectations and statistics: false hopes, mismeasurement, redistribution, and implementation lags. While a case can be made for each explanation, the researchers argue that lags are likely to be the biggest reason for paradox. The most impressive capabilities of AI, particularly those based on machine learning, have not yet diffused widely. More importantly, like other general purpose technologies, their full effects won't be realized until waves of complementary innovations are developed and implemented. The adjustment costs, organizational changes and new skills needed for successful AI can be modeled as a kind of intangible capital. A portion of the value of this intangible capital is already reflected in the market value of firms. However, most national statistics will fail to capture the full benefits of the new technologies and some may even have the wrong sign

      This is for anyone looking deeply into the economics of artificial intelligence, or doing a project on AI with respect to economics. The paper explores how AI might affect our economy and change the way we think about work. Some of the predictions discussed are striking, such as the idea that people 30 years from now could be living on government employment in which everyone receives an equal amount of pay.

    1. Despite the potential of emerging technologies to assist persons with cognitive disabilities, significant practical impediments remain to be overcome in commercialization, consumer abandonment, and in the design and development of useful products. Barriers also exist in terms of the financial and organizational feasibility of specific envisioned products, and their limited potential to reach the consumer market. Innovative engineering approaches, effective needs analysis, user-centered design, and rapid evolutionary development are essential to ensure that technically feasible products meet the real needs of persons with cognitive disabilities. Efforts must be made by advocates, designers and manufacturers to promote better integration of future software and hardware systems so that forthcoming iterations of personal support technologies and assisted care systems technologies do not quickly become obsolete. They will need to operate seamlessly across multiple real-world environments in the home, school, community, and workplace

      This journal article clearly explains the use of these technologies by people with special needs and how that group can leverage them, while also touching on the financial challenges they face.

    1. Elon Musk.

      A discussion, related to this topic, between Elon Musk and the Chinese entrepreneur Jack Ma about artificial intelligence (in English): Diskussion

    1. No matter how well you design a system, humans will end up surprising you with how they use it. “We make it obvious that it’s a bot, a digital assistant, at the start. But sometimes customers overlook that. And they’ll say, ‘are you a bot? What’s going on here? Transfer me through!’ And they’ll get into it quite strongly,” explains David Grilli, AGL’s chatbot product owner

      Interesting to note response to chatbots

  5. Sep 2019
    1. At the moment, GPT-2 uses a binary search algorithm, which means that its output can be considered a ‘true’ set of rules. If OpenAI is right, it could eventually generate a Turing complete program, a self-improving machine that can learn (and then improve) itself from the data it encounters. And that would make OpenAI a threat to IBM’s own goals of machine learning and AI, as it could essentially make better than even humans the best possible model that the future machines can use to improve their systems. However, there’s a catch: not just any new AI will do, but a specific type; one that uses deep learning to learn the rules, algorithms, and data necessary to run the machine to any given level of AI.

      This is a machine-generated response from 2019. We are clearly closer than most people realize to machines that can pass a text-based Turing Test.

    1. 75 countries already using the technology

      75 countries already use facial recognition

  6. Aug 2019
    1. HTM and SDRs - part of how the brain implements intelligence.

      "In this first introductory episode of HTM School, Matt Taylor, Numenta's Open Source Flag-Bearer, walks you through the high-level theory of Hierarchical Temporal Memory in less than 15 minutes."

    1. Machine learning is an approach to making many similar decisions that involves algorithmically finding patterns in your data and using these to react correctly to brand new data
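
      The quoted definition can be illustrated with a minimal sketch: "finding patterns in your data" becomes computing one centroid per label, and "reacting correctly to brand new data" becomes picking the nearest learned centroid. All names and numbers here are invented for illustration:

```python
# A minimal nearest-centroid classifier using only the standard library.
from statistics import mean

def fit(examples):
    """examples: list of (features, label). Learn one centroid per label."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: tuple(mean(col) for col in zip(*rows))
            for label, rows in by_label.items()}

def predict(centroids, features):
    """React to brand-new data: pick the closest learned pattern."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

training = [((1.0, 1.2), "small"), ((0.8, 1.0), "small"),
            ((5.0, 4.8), "large"), ((5.2, 5.1), "large")]
model = fit(training)
print(predict(model, (0.9, 1.1)))  # small
print(predict(model, (5.1, 5.0)))  # large
```

      The "many similar decisions" framing fits here: once `fit` has run, `predict` can be applied cheaply to any number of new inputs.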
    1. Semantic dictionaries are powerful not just because they move away from meaningless indices, but because they express a neural network’s learned abstractions with canonical examples. With image classification, the neural network learns a set of visual abstractions and thus images are the most natural symbols to represent them. Were we working with audio, the more natural symbols would most likely be audio clips. This is important because when neurons appear to correspond to human ideas, it is tempting to reduce them to words. Doing so, however, is a lossy operation — even for familiar abstractions, the network may have learned a deeper nuance. For instance, GoogLeNet has multiple floppy ear detectors that appear to detect slightly different levels of droopiness, length, and surrounding context to the ears. There also may exist abstractions which are visually familiar, yet that we lack good natural language descriptions for: for example, take the particular column of shimmering light where sun hits rippling water.

      nuance beyond words

    1. AI relies upon a bet. It is the bet that if you get your syntax (mechanism) right the semantics (meaning) will take care of itself. It is the hope that if computer engineers get the learning feedback process right, a new transhuman intellect will emerge.
  7. Jul 2019
    1. AI, especially in popular culture, is often a jumping-off point for dialogue with ourselves about what the future means, sometimes at the expense of understanding the present.
  8. Jun 2019
    1. By comparison, Amazon’s Best Seller badges, which flag the most popular products based on sales and are updated hourly, are far more straightforward. For third-party sellers, “that’s a lot more powerful than this Choice badge, which is totally algorithmically calculated and sometimes it’s totally off,” says Bryant.

      "Amazon's Choice" is made by an algorithm.

      Essentially, "Amazon" is Skynet.

  9. May 2019
    1. Humans act like a “liability sponge,” she says, absorbing all legal and moral responsibility in algorithmic accidents no matter how little or unintentionally they are involved.
    1. a working station that has a visual display screen some three feet on a side; this is his working surface, and is controlled by a computer (his "clerk") with which he can communicate by means of a small keyboard and various other devices

      Here's an example of a state of the art workstation in 1962.

      Tektronix 4014.jpg (image by Rees11 at English Wikipedia, transferred to Commons, CC BY-SA 2.5)

  10. Apr 2019
    1. India not seen as a major player

    2. Global AI Talent Report 2019

      India is not to be seen in this report. Women's participation is increasing.

    1. The agency is looking for industry vendors that can provide such a capability, which should also include “topic modeling; text categorization; text clustering; information extraction; named entity resolution; relationship extraction; sentiment analysis; and summarization,” and “may include statistical techniques that can provide a general understanding of the statutory and regulatory text as a whole.”

      AI is going to be used to help employees understand regulations. This is a good example of how AI can help us do our jobs better, but is there also a risk of employees missing out on crucial exposure and experience, and in the end relying too much on the machine?

    1. The underlying guiding idea of a “trustworthy AI” is, first and foremost, conceptual nonsense. Machines are not trustworthy; only humans can be trustworthy (or untrustworthy). If, in the future, an untrustworthy corporation or government behaves unethically and possesses good, robust AI technology, this will enable more effective unethical behaviour.

      yikes

    1. We often think about AI “replacing us” with a vision of robots literally doing our jobs, but it’s not going to shake out in quite that way. Look at radiology, for example: with the advances in computer vision, people sometimes talk about AI replacing radiologists. We probably won’t ever get to the point where there’s zero human radiologists. But a very possible future is one where, out of 100 radiologists now, AI lets the top 5 or 10 of them do the job of all the rest. If such a scenario plays out, where does that leave the other 90 or so doctors?
    1. Machine learning techniques were originally designed for stationary and benign environments in which the training and test data are assumed to be generated from the same statistical distribution.

      the best thing ever!
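
      The quoted assumption is easy to demonstrate: a classifier fit on one distribution degrades when the test distribution shifts. A toy sketch with invented numbers:

```python
# Train/test distribution mismatch in miniature: a one-dimensional
# threshold classifier fit on one distribution, tested on a shifted one.
import random

random.seed(0)  # deterministic toy data

def sample(shift, n=500):
    """Two classes: class 0 centred at 0 + shift, class 1 at 4 + shift."""
    return ([(random.gauss(0 + shift, 1), 0) for _ in range(n)] +
            [(random.gauss(4 + shift, 1), 1) for _ in range(n)])

def fit_threshold(data):
    """Learn the midpoint between the two class means."""
    m0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    m1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    return (m0 + m1) / 2

def accuracy(th, data):
    return sum((x > th) == y for x, y in data) / len(data)

th = fit_threshold(sample(shift=0))        # train on the original distribution
print(accuracy(th, sample(shift=0)))       # high: same distribution
print(accuracy(th, sample(shift=3)))       # much lower: distribution shifted
```

      An adversary (the "benign environments" caveat) can induce exactly this kind of shift deliberately, which is why the assumption matters for security.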

  11. Mar 2019
    1. what EU leadership in AI could look like and what might be needed to get there.

      So the EU strategy is to invest in ethical AI, thereby avoiding direct competition with China and the US while still having a place at the party?

    1. “Meditations on Moloch,”

      Clicked through to the essay. It appears to be mainly an argument for a super-powerful benevolent general artificial intelligence, of the sort proposed by AGI-maximalist Nick Bostrom.

      The money quote:

      The only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.

      🔗 This is a great New Yorker profile of Bostrom, where I learned about his views.

      🔗Here is a good newsy profile from the Economist's magazine on the Google unit DeepMind and its attempt to create artificial general intelligence.

    1. It is no wonder that AI is gaining popularity. Many facts and advantages are driving this profitable growth, and the essentials are fully presented in the given article.

    1. More people work in the shadow mines of content moderation than are officially employed by Facebook or Google. These are the people who keep our Disneyland version of the web spic and span.
  12. Feb 2019
    1. Algorithms will privilege some forms of ‘knowing’ over others, and the person writing that algorithm is going to get to decide what it means to know… not precisely, like in the former example, but through their values. If they value knowledge that is popular, then knowledge slowly drifts towards knowledge that is popular.

      I'm so glad I read Dave's post after having just read Rob Horning's great post, "The Sea Was Not a Mask", also addressing algorithms and YouTube.

    2. Some questions to use when discussing why we shouldn’t replace humans with AI (artificial intelligence) for learning

      Great discussion of what questions to ask about artificial intelligence and learning from Dave Cormier.

    1. The summation of human experience is being expanded at a prodigious rate

      The prodigious rate itself is expanding, is it a scale even conceivable at this time? (insert the usual stats of YouTube content growing at 300 hours a minute).

      I'm anxious to read whether he anticipates the notion of turning to automation to try to handle this organization; it always seemed that Bush's vision was human focused.

    2. The conceptual framework we seek must orient us toward the real possibilities and problems associated with using modern technology to give direct aid to an individual in comprehending complex situations, isolating the significant factors, and solving problems.

      This problem of orientation is more true today than ever and I'm just not convinced that Silicon Valley (however well-intentioned) represents the right group to devise a framework to truly serve EVERYONE.

      Anyone interested in joining a grassroots effort to help influence those at the top? Let me know - wkendal-at-gmail

    3. executive capability.

      All of this focus on process, sub-process and sequencing keeps me thinking of machine-learning and concepts of AI. Seems this executive capability provides differentiation; human-learning.

    4. augmentation means

      New term for me; seems to break down how we interface with the world. A lot of HCI and learning theory baked in here. Heck, AI is baked in here.

    1. But every single photo on the site has been created by using a special kind of artificial intelligence algorithm called generative adversarial networks (GANs).

      These could be actual people. How would we know?

  13. Jan 2019
    1. AI robots will be replacing 6% of white-collar jobs by 2021

      AI software and chatbots will be folded into current technologies and automated with robotic systems. They will be given rights to access calendars, email accounts, browsing history, playlists, past purchases, and media viewing history. Six percent is a huge number worldwide, and people will be seen struggling to find jobs. But there are benefits as well, since your work will get done easily and quickly.

    1. Consider another scenario, which I call the "transport plane dilemma." Suppose you are the commander of a disaster-relief operation, flying with a small team aboard a transport plane loaded with supplies. It is the only transport plane; if it does not arrive on time, tens of thousands of disaster victims will die of hunger and disease, and if it never arrives at all, hundreds of thousands will not survive. But severe weather has suddenly damaged the plane. It can no longer carry this much weight, and half the people must jump (assume the supplies cannot be jettisoned), or the plane may crash and kill everyone. Should half the team jump? The transport plane is Bitmain, and the disaster victims are today's crypto holders. If Bitmain goes under, the shock to the industry will bankrupt a large number of them. Imagine that one day Bitmain really does fail: mining rigs would be sold off at fire-sale prices, miners would dump their BTC and BCH, BCH would be left gasping, and BTC would fall to a new cycle low. There would still be "post-disaster reconstruction," but a great many people would fall in this disaster and never see tomorrow's sun. As a disaster victim, would you willingly die of hunger and disease out of pity for the half of the team that jumped? As a crypto holder, would you willingly watch Bitmain collapse, enduring even a short-term bankruptcy of your own, out of pity for the hundreds or thousands of employees who were laid off?

      <big>Comment:</big><br/><br/>The "trolley problem" has provoked prolonged debate, and its sibling, the "transport plane dilemma," may prove just as hard to resolve. Faced with such moral dilemmas, people usually judge from their own experience: the protagonist who "pulls the lever so that the trolley kills one person" receives far less public condemnation than the protagonist who "stands on a footbridge above the tracks and deliberately pushes another person off the bridge to stop the trolley and save five." What, then, of the transport plane dilemma? Is there a better solution than "half the crew jumps"?<br/><br/>Such discussions also recall the divide among technologists in the field of AI: one camp believes AI's ultimate purpose is to replace humans, while the other is convinced that AI is meant to augment them (augmentation). Which camp has the louder voice? The answer does not matter. What matters is: be nice.

    1. The chances that they might miscommunicate and collide will therefore be far smaller.

      Theoretically yes, but when we consider the number of engineers, developers, or even human-AI teams pulling these services off, they might still be like the "drivers unfamiliar with the changing traffic regulations".

    2. The technology that favored democracy is changing, and as artificial intelligence develops, it might change further.

      I would like to see arguments around this as I read further.

    1. By utilizing the Deeplearning4j library [1] for model representation, learning and prediction, KNIME builds upon a well performing open source solution with a thriving community.
    2. It is especially thanks to the work of Yann LeCun and Yoshua Bengio (LeCun et al., 2015) that the application of deep neural networks has boomed in recent years. The technique, which utilizes neural networks with many layers and enhanced backpropagation algorithms for learning, was made possible through both new research and the ever increasing performance of computer chips.
    3. One of KNIME's strengths is its multitude of nodes for data analysis and machine learning. While its base configuration already offers a variety of algorithms for this task, the plugin system is the factor that enables third-party developers to easily integrate their tools and make them compatible with the output of each other.
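
      The technique described in the second excerpt (multi-layer networks trained by backpropagation) can be sketched in miniature. This toy 2-2-1 network learns XOR; it is illustrative only and unrelated to KNIME's or Deeplearning4j's actual implementations:

```python
# Toy 2-2-1 network trained on XOR by plain backpropagation.
import math
import random

random.seed(1)

sig = lambda z: 1 / (1 + math.exp(-z))  # logistic activation

# w[0:3]: hidden unit 0 (two weights + bias), w[3:6]: hidden unit 1,
# w[6:9]: output unit (two weights + bias)
w = [random.uniform(-1, 1) for _ in range(9)]
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]),
         sig(w[3] * x[0] + w[4] * x[1] + w[5])]
    o = sig(w[6] * h[0] + w[7] * h[1] + w[8])
    return h, o

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

loss_before = loss()
lr = 0.5
for _ in range(5000):
    for x, y in data:
        h, o = forward(x)
        d_o = 2 * (o - y) * o * (1 - o)           # output delta
        d_h = [d_o * w[6] * h[0] * (1 - h[0]),    # deltas propagated back
               d_o * w[7] * h[1] * (1 - h[1])]    # through the hidden layer
        grads = [d_h[0] * x[0], d_h[0] * x[1], d_h[0],
                 d_h[1] * x[0], d_h[1] * x[1], d_h[1],
                 d_o * h[0], d_o * h[1], d_o]
        w = [wi - lr * g for wi, g in zip(w, grads)]

print(f"loss before: {loss_before:.3f}  after: {loss():.3f}")
```

      The same update rule, scaled to many more layers and combined with better optimizers and hardware, is the "enhanced backpropagation" the excerpt refers to.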
  14. Dec 2018
    1. I also argue later that the challenge of the social–technical gap creates an opportunity to re-focus CSCW as a Simonian science of the artificial (where a science of the artificial is suitably revised from Simon’s strictly empiricist grounds).

      Simonian Science of the Artificial refers to "a physical symbol system that has the necessary and sufficient means for intelligent action."

      From Simon, Herbert, "The Sciences of the Artificial," Third Edition (1996)

    1. The study was meant to answer 9 main questions about what is preferable: 1. save the life of a human or of an animal; 2. stay the course or swerve; 3. save the lives of passengers or of pedestrians; 4. the largest number of people or the smallest; 5. men or women; 6. the young or the old; 7. the fat or the thin; 8. pedestrians crossing the road in accordance with traffic rules, or jaywalkers; 9. people of high or of low social status.

      So the machine will subsequently analyze every passenger and pedestrian in the immediate vicinity in order to decide who will have to die?

  15. Nov 2018
  16. Oct 2018
    1. For all the talk about data and learning, Essa offered this blunt assessment: “Pretty much all edtech sucks. And machine learning is not going to improve edtech.” So what’s missing? “It’s not about the data, but how do we apply it. The reason why this technology sucks is because we don’t do good design. We need good design people to understand how this works.”

      I'm pretty sure this doesn't make any sense. Also, it is pretty funny.

  17. Sep 2018
    1. That’s Dr. Hunter, isn’t it? “By the Way do you mind if I ask you a personal question?

      HAL, a supposedly emotion-feigning ultra-intelligent A.I., has just asked Dave whether he may ask a "personal question." This should raise a concern in Dave, but it doesn't. Earlier in the film, during the BBC interview, the interviewer asked the astronauts whether HAL had emotions or was just faking them; they replied that he was definitely programmed to feign emotions, but whether he actually has them remains a mystery. In this scene HAL acknowledges the existence of emotions by asking permission to pose a question that might incite a negative emotional response, a "personal question." This revelation should have frightened Dave, because it shows that HAL is more than a computer and capable of more than controlling the ship and maintaining optimal performance: HAL can read emotions, and perhaps even be afflicted by them.

    2. Hal, you have an enormous responsibility on this mission  perhaps the greatest responsibility of any single mission element. You’re the brain and central nervous system of the ship. Your responsibilities include watching over the men in hibernation. Does this ever cause you any lack of confidence?

      Hal is given complete control over the ship and everything inside it, even the people. In this way he is more than a tool: he controls, he is not controlled. As portrayed in the film, he can kill any of the crew members at any time, which he does, and he advises the crew on what they should do. This is perfectly described in "The Technological Singularity," where the author states that a super-intelligent AI will be as much a tool to humanity as we are tools to animals.

    3. – Do you know what happened? I’m sorry, Dave. I don’t have enough information.

      Hal is having a very human experience at this point in the film. Not only has he killed one of the crewmates and intends to kill the others, but he has some sense that it is wrong and that it will lead to bad things for him. Even though he knows exactly what happened, he knows it would be best to keep it from Dave. This human experience only deepens when he begins to die through the slow, monotonous process of being shut down. He tells Dave that he can feel it and that he is afraid, showing that he has more than intelligence; he also has consciousness.

    1. Large computer networks (and their associated users) may “wake up” as superhumanly intelligent entities.

      The great "AI" has been around for a while now; we humans are largely working on getting a machine to think for "itself." As fascinating as it sounds, aren't we just being lazy, depending on a robot to do the work for us? What will happen to the human race if these AIs start producing more and better-equipped AIs? We have a brain that can produce so much, if we just decide to do things on our own.

    2. performance curves beginning to level off – because of our inability to automate the design work needed to support further hardware improvements. We'd end up with some very powerful hardware, but without the ability to push it further

      Addressing the question of the singularity, the author takes an interesting perspective. One opposing view is that technology is only as intelligent as its creator. As the critics conclude, "the computational competence of single neurons may be far higher than generally believed" and "our present computer hardware might be [] 10 orders of magnitude short [compared to] our heads". This would mean that AI cannot surpass human intelligence as popularly believed. Rather, the article conjectures that if the singularity were to occur, further innovation and improvement could never be made. I take this to be a biological and anatomical argument, implying that the technological constraints of AI make it inferior to the biological makeup of the human brain. Thus the author suggests that the singularity can never be fully realized.

    3. The maximum possible effectiveness of a software system increases in direct proportion to the log of the effectiveness (i.e., speed, bandwidth, memory capacity) of the underlying hardware.

      This simply states that there will always be something restricting what technologies can do. Thus far in human technological advancement there has not been a single system that can support beyond-human software. As the quote states, the "mind" of a piece of software is limited by the effectiveness of the underlying hardware, and by the time humans are able to invent something that could effectively contain a beyond-human brain, there would be countermeasures in place to reduce the risk of an AI taking over the human race. The resource cost would also discourage such an experiment from being funded, since it would be expensive to pay researchers to create compatible parts and programmers to develop something resembling, yet exceeding, a human mind. Programming is another problem: humans do not fully understand the human mind, so it is very unlikely that a programmer could accidentally write a line of code that lets an AI extend further than what a human can comprehend. The idea of a technological singularity remains a theory, and this single quote suggests it is far from achievable.

  18. Aug 2018
    1. If I have a book that has understanding for me, a spiritual adviser who has a conscience for me, a physician who judges my diet for me, and so on, I need not trouble myself. I have no need to think, if I can only pay; others will take over the irksome business for me.

      Kant on artificial intelligence

  19. Jul 2018
    1. On the other hand, computers cannot read.

      This is entirely too complex an assertion to be made without support. It seems easy to understand, and yet it is not.

  20. Jun 2018
  21. May 2018
    1. “In short, they have no history of supporting the machine learning research community and instead they are viewed as part of the disreputable ecosystem of people hoping to hype machine learning to make money.”

      Whew. Hot.

    1. AI will also serve as a global economy booster, by contributing as much as $15.7 trillion to the world economy by 2030 due to productivity and personalization improvements.

    1. in search of a guiding philosophy

      Is it "in search of" or in avoidance of?

    2. rather than to comprehend them

      Thinking about instructional design here - how verbs like understand and appreciate are to be avoided in learning outcomes because they are difficult to measure - and wondering if this isn't an outcome.

    3. Philosophers and others in the field of the humanities who helped shape previous concepts of world order tend to be disadvantaged, lacking knowledge of AI’s mechanisms or being overawed by its capacities.

      They are also disadvantaged because their fields are undervalued and underappreciated.

    4. Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of outthinking and potentially outmaneuvering them?

      Politically, people have been pushing deregulation for decades, but we have regulations for a reason, as these questions illustrate.

    5. algorithms to personalize results and make them available to other parties for political or commercial purposes

      Algorithms personalize results for political/commercial purposes

    6. internet’s purpose is to ratify knowledge

      Ratification? What about augmenting intelligence?

    7. Human cognition loses its personal character. Individuals turn into data, and data become regnant

      Reminds me of The End of Theory. But if we lose the theory, the human understanding, what will be the consequences?

    8. order is now in upheaval

      Upheaval from anti-intellectualism as well as AI

    9. Would these machines learn to communicate with one another?

      Would Skynet be born?

    10. His machine, he said, learned to master Go by training itself through practice

      The WOPR in War Games used tic-tac-toe, a game of futility. What does Go teach a computer?

    1. Google's founding philosophy is that we don't know why this page is better than that one: If the statistics of incoming links say it is, that's good enough

      "Ours is not to reason why..."

  22. Apr 2018
  23. Dec 2017
  24. Nov 2017
  25. Oct 2017
    1. I can’t go on

      but I must go on! Is this the future we are heading towards?

    1. Tim Urban

      Tim Urban was interviewed by Forbes in this article.

      He does not come across as an AI expert; he sounds like a casual blogger. We need to figure out what background gives him so much gravitas writing an article like this, and why someone like Elon Musk would believe that what he says is true.

      Or, maybe he ghost wrote this for Elon?

  26. Sep 2017
    1. First, I think you need a grasp of machine learning and algorithms; you can start with online courses. I recommend Andrew Ng's Machine Learning course, which is considered the bible for data scientists. After that you can start with Python or R and join challenges on Kaggle. Kaggle is a platform where data scientists participate, earn prize money, and compete for rankings. Many people have also told me that Kaggle is the best and shortest path into Data Science.

      Learn the basics

    1. when randomness is used, it is easy to lose accountability, since by definition any outcome which a randomized process could have produced is at least facially consistent with the design of that process

      problems randomization poses for accountability

    2. The power of computers is generally limited by a concept that computer scientists call noncomputability.58 In short, certain types of problems cannot be solved by any computer program in any finite amount of time. There are many examples of noncomputable problems, but the most famous is Alan Turing’s “Halting Problem,” which asks whether a given program will finish running (“halt”)

      Noncomputability: problems that cannot be solved by any program in any finite time
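Turing's diagonal argument behind the Halting Problem can be sketched in a few lines of Python. The names here (`make_paradox`, the deciders) are hypothetical illustrations, not a real API: given any candidate halting-decider, we construct a program on which that decider must be wrong.

```python
def make_paradox(halts):
    """Given a candidate decider halts(f) -> bool, build a program
    that does the opposite of whatever halts predicts for it."""
    def paradox():
        if halts(paradox):
            while True:      # decider said "halts", so loop forever
                pass
        return "halted"      # decider said "loops", so halt at once
    return paradox

# Refuting a decider that always answers "this program loops forever":
always_no = lambda f: False
p = make_paradox(always_no)
print(p())  # p halts immediately, contradicting always_no's verdict
```

The same construction defeats any decider, however clever, which is why "will this program finish?" cannot be answered by a program in general.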

    3. Testing of any kind is, however, a fundamentally limited approach to determining whether any fact about a computer system is true or untrue.

      Limits of testing

    4. “black-box testing,” which considers only the inputs and outputs of a system or component, and “white-box testing,” in which the structure of the system’s internals is used to design test cases
    5. dynamic methods are limited by the finite number of inputs that can be tested or outputs that can be observed
    6. Transparency advocates often claim that by reviewing a program’s disclosed source code, an analyst will be able to determine how a program behaves.47 Indeed, the very idea that transparency allows outsiders to understand how a system functions is predicated on the usefulness of static analysis. But this claim is belied by the extraordinary difficulty of identifying even genuinely malicious code (“malware”), a task which has spawned a multibillion-dollar industry based largely on the careful review of code samples collected across the internet.

      Limits of transparency - use of static analyses will have limited utility

    7. On the simplest level, some programming languages are designed to prevent certain classes of mistakes. For example, some are designed in such a way that it is impossible to make the mistake that caused the Heartbleed bug.45 These techniques have also been deployed in the aviation industry, for example, to ensure that the software that provides guidance functionality on rockets, airplanes, satellites, and scientific probes does not ever crash, as software failures have caused the losses of several vehicles in the past
    8. static methods on their own say nothing about how a program interacts with its environment

      Issues with static methods - behavior of code may vary when used in different environments,

    9. Code can be complicated or obfuscated, and even expert analysis often misses eventual problems with the behavior of the program.

      Problems with static methods for testing

    10. two testing methodologies

      2 testing methodologies - static and dynamic

    11. Test Driven Development (TDD) is a software engineering methodology practiced by many major software companies

      Software development approaches with a view to test/analyze code
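A minimal illustration of the test-first cycle that TDD prescribes, using a hypothetical `add` function (the function and its tests are invented for illustration): the tests are written first and fail, then just enough code is written to make them pass.

```python
import unittest

def add(a, b):
    # In TDD this body is written only after the tests below
    # existed and failed (the "red" phase of the red-green cycle).
    return a + b

class TestAdd(unittest.TestCase):
    # Written before the implementation, these encode the spec.
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_zero_is_identity(self):
        self.assertEqual(add(0, 7), 7)

# Run the suite programmatically and collect the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Once both tests pass ("green"), the code may be refactored with the suite acting as a safety net.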

    1. Larry analyzes your historical and real-time data to create an entire social media strategy for you.

      The company provides services for a large number of publishers worldwide. It basically writes and sends tweets based on your content for you, using deep learning.

  27. Jul 2017
    1. This figure shows the Inception-v3 model that Google proposed in 2015. The model reaches 95% accuracy on the ImageNet dataset. However, it contains 25 million parameters, and classifying a single image requires 5 billion addition or multiplication operations.

      A 95% success rate requires 25,000,000 parameters!

  28. May 2017
    1. AlphaGo is going out on top. After beating Ke Jie, the world’s best player of the ancient Chinese board game Go, for the third time today at the Future of Go Summit in Wuzhen, Google’s DeepMind unit announced that it would be the last event match the AI plays.

      This makes me feel worse somehow than if it was going to continue to play. Seems like it is saying: well, tick the box for beating humans at Go...

    1. Unanimous A.I. used a technology called “swarm intelligence” to coordinate a group of racing fans to correctly predict the Kentucky Derby superfecta (the first four places, in order). The swarm beat 540-to-1 odds, along with the most-trusted handicappers in the world.

      Is this cheating? Is it legal?

  29. Apr 2017
    1. She and her colleagues are using neural networks—complex mathematical systems for identifying patterns in data—to recognize diabetic retino­pathy, a leading cause of blindness among US adults.

      Wow, this is a very interesting application!

    1. The first, say, one hour and thirty-five minutes of The Circle are enormously powerful, in an intelligent, worry-inducing kind of way. The film’s last fifteen minutes, which feel rushed, don’t quite measure up. The ending is ambiguous, confusing, and strangely open-ended. But maybe that’s only appropriate. It feels the most like reality.

      This is better than the NYT review said.