299 Matching Annotations
  1. Apr 2021
    1. Deep Reinforcement Learning and its Neuroscientific Implications. In this paper, the authors provide a high-level introduction to deep RL, discuss some of its initial applications to neuroscience, survey its wider implications for research on brain and behaviour, and conclude with a list of opportunities for next-stage research. Although deep RL seems promising, the authors note that it is still a work in progress and that its implications for neuroscience should be seen as a great opportunity. For instance, deep RL provides an agent-based framework for studying how reward shapes representation, and how representation, in turn, shapes learning and decision making, two issues which together span a large swath of what is most central to neuroscience. Check the paper here.

      This should be of interest to the @braingel group and others interested in the intersections of AI and neuroscience.

  2. Mar 2021
    1. I can see what I was doing a handful of years ago or to see a forgotten picture of one of my children doing something cute
    1. The digital universe could add some 175 zettabytes of data per year by 2025, according to the market-analysis firm IDC.
    2. The process of DNA data storage combines DNA synthesis, DNA sequencing and an encoding and decoding algorithm to pack information into DNA more durably and at higher density than is possible in conventional media. That could be up to 17 exabytes per gram.
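      The core packing idea can be sketched in a few lines: with four bases, each nucleotide carries 2 bits, so one byte maps to four bases. This is a toy illustration only; published DNA storage codecs add error correction and avoid long homopolymer runs.

```python
# Toy 2-bits-per-base codec: one byte -> four nucleotides.
# Real DNA storage pipelines add error-correcting codes and avoid
# long homopolymer runs; this shows only the core packing idea.

BASES = "ACGT"  # index 0..3 encodes two bits

def encode(data: bytes) -> str:
    strand = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # most significant bits first
            strand.append(BASES[(byte >> shift) & 0b11])
    return "".join(strand)

def decode(strand: str) -> bytes:
    out = bytearray()
    for i in range(0, len(strand), 4):      # four bases per byte
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)
```

      Two bits per base is the theoretical ceiling of this naive code; the density figure above comes from the tiny mass of a nucleotide, not from anything clever in the encoding.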
    1. Using chemicals to improve our economy of attention and become emotionally "fitter" is an option that penetrated public consciousness some time ago.

      The same is true of reinforcement learning algorithms.

    2. They have become more significant because social interaction is governed by social convention to a much lesser extent than it was fifty years ago.

      Probably because everything is now algorithmically mediated.

  3. Feb 2021
    1. Currently, the downsides of this merger are starting to become obvious, including the loss of privacy, political polarization, psychological manipulation, addictive use, social anxiety and distraction, misinformation, and mass narcissism.

      Downsides of AI

    2. From a historical perspective of social change, the merger between biological and AI has already crossed beyond any point of return, at least from the social science perspective of society as a whole

      The AI / biology merger

    3. Advancements in the field of AI have been dazzling. AI has not only superseded humans in many intellectual tasks, like several kinds of cancer diagnosis and speech recognition (reducing AI’s word-error rate from 26% to 4% just between 2012 and 2016)

      Advancements in AI

    1. move away from viewing AI systems as passive tools that can be assessed purely through their technical architecture, performance, and capabilities. They should instead be considered as active actors that change and influence their environments and the people and machines around them.

      Agents don't have free will but they are influenced by their surroundings, making it hard to predict how they will respond, especially in real-world contexts where interactions are complex and can't be controlled.

    1. Koo's discovery makes it possible to peek inside the black box and identify some key features that lead to the computer's decision-making process.

      Moving towards "explainable AI".

    1. A primary goal of AI design should be not just alignment, but legibility, to ensure that the humans interacting with the AI know its goals and failure modes, allowing critique, reuse, constraint etc.

      Applying the thinking here to artificial intelligence...

  4. Jan 2021
    1. - Business, experts and citizens are the true creators of the Polish AI ecosystem. The state should, above all, support them. In the near future we are planning a series of open meetings with each of these groups, at which we will work together on refining the details, announced Antoni Rytel, deputy director of GovTech Polska. - In addition, dedicated teams will provide continuous support to all of these actors. We will also launch a channel for the ongoing submission of technical and organisational ideas supporting the development of AI in our country, he added.

      The first steps of developing AI in Poland

    2. In the short term, the decisive factor for the success of artificial-intelligence policy will be protecting talent with the skills to model knowledge and analyse data in AI systems, and supporting the development of intellectual property created in our country, adds Robert Kroplewski, plenipotentiary of the Minister of Digital Affairs for the information society.

      AI talents will be even more demanded in Poland

    3. The document defines actions and goals for Poland in the short term (to 2023), the medium term (to 2027) and the long term (after 2027). We have divided them into six areas: AI and society: actions intended to make Poland one of the larger beneficiaries of the data-driven economy, and Poles a society aware of the need to continuously improve digital competences. AI and innovative companies: support for Polish AI businesses, including mechanisms for financing their growth and for cooperation between start-ups and the government. AI and science: support for the Polish scientific and research community in designing interdisciplinary challenges and solutions in the area of AI, including measures to prepare a cadre of AI experts. AI and education: actions from primary education up to the university level, including course programmes for people at risk of losing their jobs as a result of new technologies, and educational grants. AI and international cooperation: actions to support Polish business in the field of AI and to develop the technology in the international arena. AI and the public sector: support for the public sector in carrying out AI procurement, better coordination of activities and the further development of programmes such as GovTech Polska.

      AI priorities in Poland

    4. The development of AI in Poland will increase GDP growth by as much as 2.65 percentage points each year. By 2030 it will allow roughly 49% of working time in Poland to be automated, while generating better-paid jobs in key sectors.

      Prediction of developing AI in Poland

    1. Help is coming in the form of specialized AI processors that can execute computations more efficiently and optimization techniques, such as model compression and cross-compilation, that reduce the number of computations needed. But it’s not clear what the shape of the efficiency curve will look like. In many problem domains, exponentially more processing and data are needed to get incrementally more accuracy. This means – as we’ve noted before – that model complexity is growing at an incredible rate, and it’s unlikely processors will be able to keep up. Moore’s Law is not enough. (For example, the compute resources required to train state-of-the-art AI models has grown over 300,000x since 2012, while the transistor count of NVIDIA GPUs has grown only ~4x!) Distributed computing is a compelling solution to this problem, but it primarily addresses speed – not cost.
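      To make the "model compression" mentioned above concrete, here is a toy sketch of one common technique, 8-bit post-training quantization. This is illustrative only; real frameworks quantize per-tensor or per-channel, calibrate ranges on sample data, and fuse the arithmetic into their compute kernels.

```python
# Toy post-training quantization: store float weights as 8-bit integers
# plus a scale and offset, cutting memory roughly 4x versus float32.

def quantize(weights, bits=8):
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels or 1.0       # avoid div-by-zero for constant weights
    q = [round((w - lo) / scale) for w in weights]   # ints in [0, levels]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [lo + qi * scale for qi in q]
```

      The reconstruction error is bounded by half a quantization step, which is why 8 bits is usually enough for inference but training still happens in higher precision.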
    1. At any rate, if CSHW can be used to build a good quantitative model of human-human interactions, it might also be possible to replicate these dynamics in human-computer interactions. This could take a weak form, such as building computer systems with a similar-enough interactional syntax to humans that some people could reach entrainment with it; affective computing done right.

      [[Aligning Recommender Systems]]

  5. Dec 2020
    1. The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically.

      This emerging field is often hidden under the label AI, which makes it difficult to reason about.

    2. Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.

      Analogous to the collapse of early bridges and building, before the maturation of civil engineering, our early society-scale inference-and-decision-making systems break down, exposing serious conceptual flaws.

    1. The Globe and Mail reports that Element AI sold for less than $500 million USD. This would place the purchase price well below the estimated valuation that the Montréal startup was said to have after its $200 million CAD Series B round in September 2019.

      This was effectively a down round: even though they sold for under USD $500M, their post-money valuation after the CAD $200M Series B in Sep 2019 was higher, meaning they did not improve on their valuation in the year since. Why?

    2. Despite being seen as a leader and a rising star in the Canadian AI sector, Element AI faced difficulties getting products to market.

      They faced productisation problems, just like many other AI startups. It looks like they had go-to-market problems too.

    3. Element AI had more than 500 employees, including 100 PhDs.

      500 employees is indeed large, and a 100-person team of PhDs is very large as well. They could probably tackle many difficult AI problems!

    4. In 2017, the startup raised what was then a historic $137.5 million Series A funding round from a group of notable investors including Intel, Microsoft, National Bank of Canada, Development Bank of Canada (BDC), NVIDIA, and Real Ventures.

      This was indeed a historic amount raised! Probably because of Yoshua Bengio, one of the godfathers of AI.

  6. Nov 2020
    1. AI is not analogous to the big science projects of the previous century that brought us the atom bomb and the moon landing. AI is a science that can be conducted by many different groups with a variety of different resources, making it closer to computer design than the space race or nuclear competition. It doesn’t take a massive government-funded lab for AI research, nor the secrecy of the Manhattan Project. The research conducted in the open science literature will trump research done in secret because of the benefits of collaboration and the free exchange of ideas.

      AI research is not analogous to space research or an arms race.

      It can be conducted by different groups with a variety of different resources. Research conducted in the open is likely to do better because of the benefits of collaboration.

  7. Oct 2020
    1. Facebook AI is introducing M2M-100, the first multilingual machine translation (MMT) model that can translate between any pair of 100 languages without relying on English data. It’s open sourced here. When translating, say, Chinese to French, most English-centric multilingual models train on Chinese to English and English to French, because English training data is the most widely available. Our model directly trains on Chinese to French data to better preserve meaning. It outperforms English-centric systems by 10 points on the widely used BLEU metric for evaluating machine translations. M2M-100 is trained on a total of 2,200 language directions — or 10x more than previous best, English-centric multilingual models. Deploying M2M-100 will improve the quality of translations for billions of people, especially those that speak low-resource languages. This milestone is a culmination of years of Facebook AI’s foundational work in machine translation. Today, we’re sharing details on how we built a more diverse MMT training data set and model for 100 languages. We’re also releasing the model, training, and evaluation setup to help other researchers reproduce and further advance multilingual models. 

      Summary of the 1st AI model from Facebook that translates directly between languages (not relying on English data)
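      For context on the "10 points" claim, BLEU scores a translation by its n-gram overlap with a reference. A minimal unsmoothed sentence-level version is below; real evaluations use corpus-level BLEU with multiple references and smoothing (e.g. sacreBLEU), so treat this as a sketch of the metric, not the tooling Facebook used.

```python
import math
from collections import Counter

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Unsmoothed sentence-level BLEU against a single reference."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((cand_ngrams & ref_ngrams).values())   # clipped counts
        total = sum(cand_ngrams.values())
        if overlap == 0 or total == 0:
            return 0.0               # unsmoothed: one empty level zeroes the score
        log_precisions.append(math.log(overlap / total))
    # brevity penalty discourages very short candidates
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(log_precisions) / max_n)
```

      BLEU is reported on a 0-100 scale in papers (multiply the value above by 100), so a 10-point gain is a large margin.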

  8. Sep 2020
  9. Aug 2020
  10. Jul 2020
  11. Jun 2020
    1. Google’s novel response has been to compare each app to its peers, identifying those that seem to be asking for more than they should, and alerting developers when that’s the case. In its update today, Google says “we aim to help developers boost the trust of their users—we surface a message to developers when we think their app is asking for a permission that is likely unnecessary.”

  12. May 2020
    1. Machine learning has a limited scope
    2. AI is a bigger concept to create intelligent machines that can simulate human thinking capability and behavior, whereas, machine learning is an application or subset of AI that allows machines to learn from data without being programmed explicitly
    1. Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed
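      "Learn from experience without being explicitly programmed" can be shown in miniature: in the toy sketch below, the slope of the relationship is never written into the program; it is recovered from example pairs by gradient descent.

```python
# The slope of the relationship is never hard-coded: it is learned
# from example (x, y) pairs by minimising squared error.
data = [(x, 2.0 * x) for x in range(1, 6)]   # examples drawn from y = 2x

w = 0.0                                       # initial guess for the slope
lr = 0.01                                     # learning rate
for _ in range(200):
    # gradient of mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges to 2.0
```

      Swap in different example pairs and the same unchanged program learns a different slope, which is the point of the quoted definition.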
    1. machines tend to be designed for the lowest possible risk and the least casualties

      why is this a problem?

    2. machines must weigh the consequences of any action they take, as each action will impact the end result
    3. goals of artificial intelligence include learning, reasoning, and perception
    4. refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions
  13. Apr 2020
    1. As the largest Voronoi regions belong to the states on the frontier of the search, this means that the tree preferentially expands towards large unsearched areas.
    2. inherently biased to grow towards large unsearched areas of the problem
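      The Voronoi bias described above falls out of a minimal RRT: sampling uniformly and extending from the nearest node means frontier nodes, which own the largest Voronoi regions, get extended most often. A pure-Python sketch with no obstacles; the 2-D setting, `step` and `bounds` are illustrative assumptions.

```python
import math
import random

def rrt(start, goal, n_iters=500, step=0.5, bounds=(0.0, 10.0), seed=0):
    """Minimal 2-D RRT. A uniform random sample most often lands in the
    largest Voronoi region, which belongs to a frontier node, so the tree
    is pulled toward large unsearched areas."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(n_iters):
        sample = (rng.uniform(*bounds), rng.uniform(*bounds))
        # the nearest existing node "owns" the Voronoi region the sample fell in
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        if d == 0.0:
            continue
        # extend at most one step from the nearest node toward the sample
        t = min(step, d) / d
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < step:      # close enough to the goal
            break
    return nodes, parent
```

      A full planner would also check each extension for collisions before adding it to the tree; the expansion bias is unchanged by that.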
    1. Natural Language Processing with Python – Analyzing Text with the Natural Language Toolkit Steven Bird, Ewan Klein, and Edward Loper
    1. How to setup and use Stanford CoreNLP Server with Python (Khalid Alnajjar, August 20, 2017). Stanford CoreNLP is a great Natural Language Processing (NLP) tool for analysing text. Given a paragraph, CoreNLP splits it into sentences then analyses it to return the base forms of words in the sentences, their dependencies, parts of speech, named entities and many more. Stanford CoreNLP supports not only English but also 5 other languages: Arabic, Chinese, French, German and Spanish. To try out Stanford CoreNLP, click here. Stanford CoreNLP is implemented in Java. In some cases (e.g. your main code-base is written in a different language or you simply do not feel like coding in Java), you can set up a Stanford CoreNLP Server and then access it through an API. In this post, I will show how to set up a Stanford CoreNLP Server locally and access it using Python.
    1. CoreNLP includes a simple web API server for servicing your human language understanding needs (starting with version 3.6.0). This page describes how to set it up. CoreNLP server provides both a convenient graphical way to interface with your installation of CoreNLP and an API with which to call CoreNLP using any programming language. If you’re writing a new wrapper of CoreNLP for using it in another language, you’re advised to do it using the CoreNLP Server.
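      A minimal version of the setup both posts describe, following the official server documentation; the heap size, port and annotator list are common defaults, adjust as needed:

```shell
# Start the server from the directory where the CoreNLP distribution
# was unzipped (4 GB heap, port 9000):
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000

# Then query it over HTTP from any language, e.g. with curl:
curl --data 'The quick brown fox jumped over the lazy dog.' \
  'http://localhost:9000/?properties={"annotators":"tokenize,pos,lemma","outputFormat":"json"}'
```

      Any HTTP client works the same way, which is how the Python wrappers mentioned above talk to the server under the hood.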
    1. Programming languages and operating systems Stanford CoreNLP is written in Java; recent releases require Java 1.8+. You need to have Java installed to run CoreNLP. However, you can interact with CoreNLP via the command-line or its web service; many people use CoreNLP while writing their own code in Javascript, Python, or some other language. You can use Stanford CoreNLP from the command-line, via its original Java programmatic API, via the object-oriented simple API, via third party APIs for most major modern programming languages, or via a web service. It works on Linux, macOS, and Windows. License The full Stanford CoreNLP is licensed under the GNU General Public License v3 or later. More precisely, all the Stanford NLP code is GPL v2+, but CoreNLP uses some Apache-licensed libraries, and so our understanding is that the composite is correctly licensed as v3+.
    2. Stanford CoreNLP provides a set of human language technology tools. It can give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases and syntactic dependencies, indicate which noun phrases refer to the same entities, indicate sentiment, extract particular or open-class relations between entity mentions, get the quotes people said, etc. Choose Stanford CoreNLP if you need: An integrated NLP toolkit with a broad range of grammatical analysis tools A fast, robust annotator for arbitrary texts, widely used in production A modern, regularly updated package, with the overall highest quality text analytics Support for a number of major (human) languages Available APIs for most major modern programming languages Ability to run as a simple web service
    1. OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. The library has more than 2500 optimized algorithms, which includes a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high resolution image of an entire scene, find similar images from an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, etc. OpenCV has more than 47 thousand people of user community and estimated number of downloads exceeding 18 million. The library is used extensively in companies, research groups and by governmental bodies. Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, Toyota that employ the library, there are many startups such as Applied Minds, VideoSurf, and Zeitera, that make extensive use of OpenCV. OpenCV’s deployed uses span the range from stitching streetview images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at Willow Garage, detection of swimming pool drowning accidents in Europe, running interactive art in Spain and New York, checking runways for debris in Turkey, inspecting labels on products in factories around the world on to rapid face detection in Japan. 
It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are being actively developed right now. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers.
    1. that can be partially automated but still require human oversight and occasional intervention
    2. but then have a tool that will show you each of the change sites one at a time and ask you either to accept the change, reject the change, or manually intervene using your editor of choice.
  14. Mar 2020
    1. Humans can no longer compete with AI in chess. They should not be without AI in litigation either.
    2. Just as chess players marshall their 16 chess pieces in a battle of wits, attorneys must select from millions of cases in order to present the best legal arguments.
    1. Now that we’re making breakthroughs in artificial intelligence, there’s a deeply cemented belief that the human brain works as a deterministic, mathematical process that can be replicated exactly by a Turing machine.
    1. Overestimating robots and AI underestimates the very people who can save us from this pandemic: Doctors, nurses, and other health workers, who will likely never be replaced by machines outright. They’re just too beautifully human for that.

      Yes - we used to have human elevator operators and telephone operators that would manually connect your calls. We now have automated check-out lines in stores and toll booths. In the future, we will have automated taxis and, yes, even some automated health care. Automated healthcare will enable better healthcare coverage with the same number of healthcare workers (or the same level of coverage with fewer workers). There can be good things or bad things about it - the way we do it will absolutely matter. We just need to think through how best to obtain the good without much of the bad ... rather than assuming it won't ever happen.

    2. the demand for products will keep climbing as well, as we’re seeing with this hiring bonanza.

      Probably not. The increase in demand is a result of the social-distancing and the hoarding. This is not a steady state. The demand for many things will return to normal (or below) once people figure out what they are using and what is still available. For example - you don't use that much more toilet paper when you are at home ... but you buy more if you don't know when it will be available again.

    3. Last week, Amazon officials announced that in response to the coronavirus they were hiring 100,000 additional humans to work in fulfillment centers and as delivery drivers, showing that not even this mighty tech company can do without people.

      Amazon has adopted automation in a very big and increasing way. Just because it has not automated everything yet, doesn't mean that complete automation isn't possible. We already know automated delivery is in the works. Amazon, Uber and Google are all working on the details of autonomous navigation ... and the ultimate result will absolutely impact future drivers (pun intended).

    4. Why haven’t the machines saved us yet?

      because machines don't buy tickets to fly on planes and vacation on cruise ships.

    5. And that’s all because of the vulnerabilities of the human worker.

      It has more to do with the vulnerabilities of the human traveler and the human guest (and less to do with the workers). The demand for these services has simply gone down while people try to avoid spreading the virus.

    1. The system has been criticised due to its method of scraping the internet to gather images and storing them in a database. Privacy activists say the people in those images never gave consent. “Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said in a recent interview with CoinDesk. “It’s kind of a bizarre argument to make because [your face is the] most public thing out there.”
    1. According to the Police Authority's guidelines, an impact assessment must be carried out before new police tools are introduced, if they involve sensitive processing of personal data. No such assessment has been made for the tool in question.

      Swedish police have used Clearview AI without any impact assessment ('konsekvensbedömning') having been performed.

      In other words, Swedish police have used a facial-recognition system without being allowed to do so.

      This is a clear breach of human rights.

      Swedish police have lied about this, as reported by Dagens Nyheter.

    1. new technologies are present in everyone's life, both at work and in daily life. Often we do not even realise that we are interacting with automated systems, or that we are scattering data about our personal identity across the network. This produces a serious asymmetry between those who extract the data (for their own interests) and those who supply it (without knowing). To obtain certain services, some sites ask us to confirm that we are not robots, but in reality the question should be turned around
    2. «Ethics must accompany the entire cycle of technology development: from the choice of research directions through design, production, distribution, and on to the end user. It is in this sense that Pope Francis has spoken of "algor-ethics" ("algoretica")»
    1. However, there is skepticism about AI’s ability to replace human teaching in activities such as judging writing style, and some have expressed concern that policy makers could use AI to justify replacing (young) human labor.

      Maha describes here the primary concern I have with the pursuit of both AI and adaptive technologies in education. Not that the designers of such tools are attempting to replace human interaction, but that the spread of "robotic" educational tools will accelerate the drive to further reduce human-powered teaching and learning, leading perhaps to class-based divisions in educational experiences like Maha imagines here.

      AI and adaptive tool designers often say that they are hoping their technologies will free up time for human teachers to focus on more impactful educational practices. However, we already see how technologies that reduce human labor often lead to further reductions in the use of human teachers, not their increase. As Maha points out, that's a social and economic issue, not a technology issue. If we focus on building tools rather than revalorizing human-powered education, I fear we are accelerating the devaluation of education already taking place.

  15. Jan 2020
    1. Norbert Wiener was a mathematician with extraordinarily broad interests. The son of a Harvard professor of Slavic languages, Wiener was reading Dante and Darwin at seven, graduated from Tufts at fourteen, and received a PhD from Harvard at eighteen. He joined MIT's Department of Mathematics in 1919, where he remained until his death in 1964 at sixty-nine. In Ex-Prodigy, Wiener offers an emotionally raw account of being raised as a child prodigy by an overbearing father. In I Am a Mathematician, Wiener describes his research at MIT and how he established the foundations for the multidisciplinary field of cybernetics and the theory of feedback systems. This volume makes available the essence of Wiener's life and thought to a new generation of readers.

    1. Cut and erase artwork Transform your artwork by cutting and erasing content.
    2. Transform artwork Learn how to transform artwork with the Selection tool, Transform panel, and various transform tools.
    1. Scale objects Scaling an object enlarges or reduces it horizontally (along the x axis), vertically (along the y axis), or both. Objects scale relative to a reference point which varies depending on the scaling method you choose. You can change the default reference point for most scaling methods, and you can also lock the proportions of an object.
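      The geometry behind scaling about a reference point is small: p' = ref + s·(p − ref), so the reference point always maps to itself. A sketch of that formula as a hypothetical helper (for illustration, not Illustrator's API):

```python
def scale_point(p, s, ref=(0.0, 0.0)):
    """Scale point p about reference point ref: p' = ref + s * (p - ref).
    s may be one uniform factor or separate (sx, sy) factors, mirroring
    horizontal-only, vertical-only, or proportional scaling."""
    sx, sy = (s, s) if isinstance(s, (int, float)) else s
    return (ref[0] + sx * (p[0] - ref[0]),
            ref[1] + sy * (p[1] - ref[1]))
```

      Locking proportions corresponds to forcing sx == sy; changing the reference point only changes which point stays fixed, not the shape of the result.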
    1. The underlying guiding idea of a “trustworthy AI” is, first and foremost, conceptual nonsense. Machines are not trustworthy; only humans can be trustworthy (or untrustworthy). If, in the future, an untrustworthy corporation or government behaves unethically and possesses good, robust AI technology, this will enable more effective unethical behaviour.

      yikes

  16. Dec 2019
    1. Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold, it's impossible, but, as power increases, eventually it could become easy.[79] With regard to computer vision, Moravec estimated that simply matching the edge and motion detection capabilities of human retina in real time would require a general-purpose computer capable of 10⁹ operations/second (1000 MIPS).[80] As of 2011, practical computer vision applications require 10,000 to 1,000,000 MIPS. By comparison, the fastest supercomputer in 1976, Cray-1 (retailing at $5 million to $8 million), was only capable of around 80 to 130 MIPS, and a typical desktop computer at the time achieved less than 1 MIPS.
    1. This is not a new idea. It is based on the vision expounded by Vannevar Bush in his 1945 essay “As We May Think,” which conjured up a “memex” machine that would remember and connect information for us mere mortals. The concept was refined in the early 1960s by the Internet pioneer J. C. R. Licklider, who wrote a paper titled “Man-Computer Symbiosis,” and the computer designer Douglas Engelbart, who wrote “Augmenting Human Intellect.” They often found themselves in opposition to their colleagues, like Marvin Minsky and John McCarthy, who stressed the goal of pursuing artificial intelligence machines that left humans out of the loop.

      Seymour Papert had an approach that provides a nice synthesis between these two camps, by leveraging early childhood development to provide insights into the creation of AI.

    2. Thompson’s point is that “artificial intelligence” — defined as machines that can think on their own just like or better than humans — is not yet (and may never be) as powerful as “intelligence amplification,” the symbiotic smarts that occur when human cognition is augmented by a close interaction with computers.

      Intelligence amplification over artificial intelligence. In reality you can't get to AI until you've mastered IA.

    1. Four databases of citizen science and crowdsourcing projects — SciStarter, the Citizen Science Association (CSA), CitSci.org, and the Woodrow Wilson International Center for Scholars (the Wilson Center Commons Lab) — are working on a common project metadata schema to support data sharing with the goal of maintaining accurate and up to date information about citizen science projects. The federal government is joining this conversation with a cross-agency effort to promote citizen science and crowdsourcing as a tool to advance agency missions. Specifically, the White House Office of Science and Technology Policy (OSTP), in collaboration with the U.S. Federal Community of Practice for Citizen Science and Crowdsourcing (FCPCCS), is compiling an Open Innovation Toolkit containing resources for federal employees hoping to implement citizen science and crowdsourcing projects. Navigation through this toolkit will be facilitated in part through a system of metadata tags. In addition, the Open Innovation Toolkit will link to the Wilson Center’s database of federal citizen science and crowdsourcing projects. These groups became aware of their complementary efforts and the shared challenge of developing project metadata tags, which gave rise to the need of a workshop.

      Sense Collective's Climate Tagger API and Pool Party Semantic Web plug-in are perfectly suited to support The Wilson Center's metadata schema project. Creating a common metadata schema that is used across multiple organizations working within the same domain, with similar (and overlapping) data and data types, is an essential step towards realizing collective intelligence. There is significant redundancy that consumes limited resources as organizations often perform the same type of data structuring. Interoperability issues between organizations, their metadata semantics and serialization methods, prevent cumulative progress as a community. Sense Collective's MetaGrant program is working to provide a shared infastructure for NGO's and social impact investment funds and social impact bond programs to help rapidly improve the problems that are being solved by this awesome project of The Wilson Center. Now let's extend the coordinated metadata semantics to 1000 more organizations and incentivize the citizen science volunteers who make this possible, with a closer connection to the local benefits they produce through their efforts. With integration into Social impact Bond programs and public/private partnerships, we are able to incentivize collective action in ways that match the scope and scale of the problems we face.