- Apr 2019
- Feb 2019
In amplifying our intelligence, we are applying the principle of synergistic structuring that was followed by natural evolution in developing the basic human capabilities. What we have done in the development of our augmentation means is to construct a superstructure that is a synthetic extension of the natural structure upon which it is built. In a very real sense, as represented by the steady evolution of our augmentation means, the development of "artificial intelligence" has been going on for centuries.
Engelbart explicitly noted that what he was trying to do was not just hack culture, which is what significant innovations accomplish, but to hack the process by which biological and cultural co-evolution has bootstrapped itself to this point. Culture used the capabilities provided by biological evolution -- language, thumbs, etc. -- to improve human ways of living much faster than biological evolution can do, by not just inventing, but passing along to each other and future generations the knowledge of what was invented and how to invent. Engelbart proposes an audio-visual-tactile interface to computing as a tool for consciously accelerating the scope and power of individual and collective intelligence.
Our culture has evolved means for us to organize the little things we can do with our basic capabilities so that we can derive comprehension from truly complex situations, and accomplish the processes of deriving and implementing problem solutions. The ways in which human capabilities are thus extended are here called augmentation means, and we define four basic classes of them:
- Artifacts—physical objects designed to provide for human comfort, for the manipulation of things or materials, and for the manipulation of symbols.
- Language—the way in which the individual parcels out the picture of his world into the concepts that his mind uses to model that world, and the symbols that he attaches to those concepts and uses in consciously manipulating the concepts ("thinking").
- Methodology—the methods, procedures, strategies, etc., with which an individual organizes his goal-centered (problem-solving) activity.
- Training—the conditioning needed by the human being to bring his skills in using Means 1, 2, and 3 to the point where they are operationally effective.

The system we want to improve can thus be visualized as a trained human being together with his artifacts, language, and methodology. The explicit new system we contemplate will involve as artifacts computers, and computer-controlled information-storage, information-handling, and information-display devices. The aspects of the conceptual framework that are discussed here are primarily those relating to the human being's ability to make significant use of such equipment in an integrated system.
To me, this is the most prescient of Engelbart's future visions, and the seed for future study of culture-technology co-evolution. I talked with Engelbart about this passage over the years and we agreed that although the power of the artifacts, from RAM to CPU speed to network bandwidth, had improved a billionfold since 1962, the "softer" parts of the formula -- the language, methodology, and training -- have not advanced so much. Certainly language, training methods and pedagogy, and collaborative strategies have evolved with the growth and spread of digital media, but they are still lagging. H-LAM/T interests me even more today than it did thirty years ago because Engelbart unknowingly forecast the fundamental elements of what has come to be called cultural-biological co-evolution. I gave a TED talk in 2005, calling for an interdisciplinary study of human cooperation -- and of the obstacles to cooperation. It seems that in recent years an interdisciplinary understanding has begun to emerge. Joseph Henrich at Harvard, for one, in his recent book, The Secret of Our Success, noted:
Drawing insights from lost European Explorers, clever chimpanzees, hunter-gatherers, cultural neuroscience, ancient bones and the human genome, Henrich shows that it’s not our general intelligence, innate brain power, or specialized mental abilities that explain our success. Instead, it’s our collective brains, which arise from a combination of our ability to learn selectively from each other and our sociality. Our collective brains, which often operate outside of any individual’s conscious awareness, gradually produce increasingly complex, nuanced and subtle technological, linguistic and social products over generations.
Tracking this back into the mist of our evolutionary past, and to the remote corners of the globe, Henrich shows how this non-genetic system of cultural inheritance has long driven human genetic evolution. By producing fire, cooking, water containers, tracking know-how, plant knowledge, words, hunting strategies and projectiles, culture-driven genetic evolution expanded our brains, shaped our anatomy and physiology, and influenced our psychology, making us into the world’s only living cultural species. Only by understanding cultural evolution, can we understand human genetic evolution.
Henrich, Boyd, and Richerson wrote about the social fundamentals that distinguish human culture's methods of evolving collective intelligence in The Origin and Evolution of Cultures:
Surely, without punishment, language, technology, individual intelligence and inventiveness, ready establishment of reciprocal arrangements, prestige systems and solutions to games of coordination, our societies would take on a distinctly different cast. Thus, a major constraint on explanations of human sociality is its systemic structure.
- Aug 2018
Capacity can also affect crisis potential through staffing decisions that affect the diversity of acts that are available. Enactment is labour-intensive, which means understaffing has serious effects.
A diverse labor force is also a central principle of effective crowdsourcing and collective intelligence.
"... groups of individuals doing things collectively that seem intelligent.” 
Collective intelligence definition.
Per the authors, "collective intelligence is a superset of social computing and crowdsourcing, because both are defined in terms of social behavior."
Collective intelligence is differentiated from human computation because the latter doesn't require a group.
It is differentiated from crowdsourcing because it doesn't require a public crowd and it can happen without an open call.
Thus it becomes possible to see how questions around data use need to shift from asking what is in the data, to include discussions of how the data is structured, and how this structure codifies value systems and social practices, subject positions and forms of visibility and invisibility (and thus forms of surveillance), along with the very ideas of crisis, risk governance and preparedness. Practices around big data produce and perpetuate specific forms of social engagement as well as understandings of the areas affected and the people being served.
How data structure influences value systems and social practices is a much-needed topic of inquiry.
Big data is not just about knowing more. It could be – and should be – about knowing better or about changing what knowing means. It is an ethico-episteme-ontological-political matter. The ‘needle in the haystack’ metaphor conceals the fact that there is no such thing as one reality that can be revealed. But multiple, lived realities are made through mediations and human and technological assemblages. Refugees’ realities of intersecting intelligences are shaped by the ethico-episteme-ontological politics of big data.
Big, sweeping statement that helps frame how big data could be better conceptualized as a complex, socially contextualized, temporal artifact.
Burns (2015) builds on this to investigate how, within digital humanitarianism discourses, big data produce and perform subjects ‘in need’ (individuals or communities affected by crises) and a humanitarian ‘saviour’ community that, in turn, seeks answers through big data.
I don't understand what Burns is arguing here. Who is he referring to that claims the DHN is a "savior" or "the solution" to crisis response?
"Big data should therefore be conceptualized as a framing of what can be known about a humanitarian crisis, and how one is able to grasp that knowledge; in short, it is an epistemology. This epistemology privileges knowledges and knowledge-based practices originating in remote geographies and de-emphasizes the connections between multiple knowledges.... Put another way, this configuration obscures the funding, resource, and skills constraints causing imperfect humanitarian response, instead positing volunteered labor as ‘the solution.’ This subjectivity formation carves a space in which digital humanitarians are necessary for effective humanitarian activities." (Burns 2015: 9–10)
Crises are often not a crisis of information. It is often not a lack of data or capacity to analyse it that prevents ‘us’ from preventing disasters or responding effectively. Risk management fails because there is a lack of a relational sense of responsibility. But this does not have to be the case. Technologies that are designed to support collaboration, such as what Jasanoff (2007) terms ‘technologies of humility’, can be better explored to find ways of framing data and correlations that elicit a greater sense of relational responsibility and commitment.
Is it "a lack of relational sense of responsibility" in crisis response (state vs private sector vs public) or is it the wicked problem of power, class, social hierarchies, etc.?
"... ways of framing data and correlations that elicit a greater sense of responsibility and commitment."
That framing could have a temporal component, positioning urgency, timescape, horizon, etc.
In some ways this constitutes the production of ‘liquid resilience’ – a deflection of risk to the individuals and communities affected which moves us from the idea of an all-powerful and knowing state to that of a ‘plethora of partial projects and initiatives that are seeking to harness ICTs in the service of better knowing and governing individuals and populations’ (Ruppert 2012: 118)
This critique addresses surveillance-state concerns about gluing datasets together to form a broader understanding of aggregate social behavior without the necessary constraints/warnings about social contexts and discontinuity between data.
Skimmed the Ruppert paper, sadly doesn't engage with time and topologies.
Indeed, as Chandler (2015: 9) also argues, crowdsourcing of big data does not equate to a democratisation of risk assessment or risk governance:
Beyond this quote, Chandler (in engaging crisis/disaster scenarios) argues that Big Data may be more appropriately framed as community reflexive knowledge than causal knowledge. That's an interesting idea.
*"Thus, it would be more useful to see Big Data as reflexive knowledge rather than as causal knowledge. Big Data cannot help explain global warming but it can enable individuals and households to measure their own energy consumption through the datafication of household objects and complex production and supply chains. Big Data thereby datafies or materialises an individual or community’s being in the world. This reflexive approach works to construct a pluralised and multiple world of self-organising and adaptive processes. The imaginary of Big Data is that the producers and consumers of knowledge and of governance would be indistinguishable; where both knowing and governing exist without external mediation, constituting a perfect harmonious and self-adapting system: often called ‘community resilience’. In this discourse, increasingly articulated by governments and policy-makers, knowledge of causal connections is no longer relevant as communities adapt to the real-time appearances of the world, without necessarily understanding them."*
"Rather than engaging in external understandings of causality in the world, Big Data works on changing social behaviour by enabling greater adaptive reflexivity. If, through Big Data, we could detect and manage our own biorhythms and know the effects of poor eating or a lack of exercise, we could monitor our own health and not need costly medical interventions. Equally, if vulnerable and marginal communities could ‘datafy’ their own modes of being and relationships to their environments they would be able to augment their coping capacities and resilience without disasters or crises occurring. In essence, the imaginary of Big Data resolves the essential problem of modernity and modernist epistemologies, the problem of unintended consequences or side-effects caused by unknown causation, through work on the datafication of the self in its relational-embeddedness. This is why disasters in current forms of resilience thinking are understood to be ‘transformative’: revealing the unintended consequences of social planning which prevented proper awareness and responsiveness. Disasters themselves become a form of ‘datafication’, revealing the existence of poor modes of self-governance."*
Downloaded Chandler paper. Cites Meier quite a bit.
But Burns finds that humanitarian staff often describe the local communities and ‘crowds’ as the ‘eyes, ears and sensors’ of UN staff, which does not index a genuine collaborative relationship. He states: ‘In all these cases, the discourse talks of putting local people “in the driving seat” when in reality the direction of the journey has already been decided’ (Burns 2015: 48). Burns (2015: 42) also notes that this leads to a transformation of social responsibility into individual responsibility.

Neoliberalism’s promotion of free market norms is therefore much more than the simple ideology of free market economics. It is a specific form of social rule that institutionalises a rationality of competition, enterprise and individualised responsibility. Although the state ‘steps back’ and encourages the free conduct of individuals, this is achieved through active intervention into civil society and the opening up of new areas to the logic of private enterprise and individual initiative. This is the logic behind the rise of resilience.
Burns' criticism: humanitarian response is not truly collaborative, and it abdicates the state's responsibility for social welfare to the private sector.
The UNHCR has even called for the refugees themselves to also develop their own data solutions and ideas (see Palmer 2014) as a way to help build their ideologies into the data infrastructures and thus bring their prisms into view. This could create a richer situational awareness and a better ability to understand and deal with unfolding and future crises by supporting resilient communities through giving them the means of producing and sharing data.
Participatory-design and community-centered design could be very helpful in this regard but this argument seems overstated.
Evokes concerns about "distant suffering" (see: Chouliaraki, 2008): Who gets to share? What community? Refugees are not homogeneous.
Doing so switches the discourse from vulnerability, where there is a need for external protection mobilised from above to come in and rescue the refugees, to one of resilience, where self-sufficiency and autonomy are part of the equation (Meier 2013).
The dichotomy between state-led response vs community-coordinated response as the only ways to deliver aid seems unnecessarily limited.
It can be both and other models/new ideas.
Conflict- and persecution-driven humanitarian needs are often rife with complexity and receive scant attention outside of the humanitarian INGO sector.
Yet, at the same time as power is exercised by both the state and corporations, power is gathering from the bottom up in new ways. In disaster response, a dynamic interplay between publics and experts is captured by the concept of social collective intelligence (Büscher et al. 2014); a disruptive innovative force that is challenging the social, economic, political and organisational practices that shape disaster response.
Cited paper references social media and DHN work.
Since the data is already being collected on a regular basis by ubiquitous private firms, it is thought to contain information that will increase opportunities for intelligence gathering and thereby security. This marks a shift from surveillance to ‘dataveillance’ (van Dijck 2014), where the impetus for data processing is no longer motivated by specific purposes or suspicions, but opportunistic discovery of anomalies that can be investigated. For crisis management this could mean benefits such as richer situation awareness, increased capacity for risk assessment, anticipation and prediction, as well as more agile response.
The supposed benefits for crisis management don't correspond to the earlier criticisms about data quality, loss of contextualization, and predictive analytics accuracy.
The following paragraph clears up some of the overly optimistic promises. Perhaps this section is simply overstated for rhetorical purposes.
Although Snowden’s revelations shocked the world and prompted calls for a public debate on issues of privacy and transparency
I understand the desire to use a topical hook to explain a complex topic, but referring to the highly contentious Snowden scandal as a frame seems risky (alienating) and could potentially undermine an important argument about the surveillance state should new revelations emerge about his motives or credibility.
While seemingly avoiding the traps of exerting top-down power over people the state does not yet have formal control over, and simultaneously providing support for self-determination and choice to empower individuals for self-sufficiency rather than defining them as vulnerable and passive recipients of top-down protection (Meier 2013), tying individual aid to mobile tracking puts refugees in a situation where their security is dependent upon individual choice and the private sector. Apart from disrupting traditional dynamics of responsibility for aid and protection, public–private sharing of intelligence brings new forms of dataveillance.
If the goal is to improve rapid/efficient response to those in need, is it necessarily only a dichotomy of top-down institutional action vs private sector/market-driven reaction? Surely, we can do better than this.
Data/predictive analytics abuses by the private sector are legion.
How does social construction vs technological determinism fit here? In what ways are the real traumas suffered by crisis-affected people not being taken into account during the response/relief/resiliency phases?
However, with these big data collections, the focus becomes not the individual’s behaviour but social and economic insecurities, vulnerabilities and resilience in relation to the movement of such people. The shift acknowledges that what is surveilled is more complex than an individual person’s movements, communications and actions over time.
The shift from INGO emergency response/logistics to state-sponsored, individualized resilience via the private sector seems profound here.
There's also a subtle temporal element here of surveilling need and collecting data over time.
Again, raises serious questions about the use of predictive analytics, data quality/classification, and PII ethics.
Andrejevic and Gates (2014: 190) suggest that ‘the target becomes the hidden patterns in the data, rather than particular individuals or events’. National and local authorities are not seeking to monitor individuals and discipline their behaviour but to see how many people will reach the country and when, so that they can accommodate them, secure borders, and identify long-term social outlooks such as education, civil services, and impacts upon the host community (Pham et al. 2015).
This seems like a terribly naive conclusion about mass data collection by the state.
"Yet even if capacities to analyse the haystack for needles more adequately were available, there would be questions about the quality of the haystack, and the meaning of analysis. For ‘Big Data is not self-explanatory’ (Bollier 2010: 13, in boyd and Crawford 2012). Neither is big data necessarily good data in terms of quality or relevance (Lesk 2013: 87) or complete data (boyd and Crawford 2012)."
as boyd and Crawford argue, ‘without taking into account the sample of a data set, the size of the data set is meaningless’ (2012: 669). Furthermore, many techniques used by the state and corporations in big data analysis are based on probabilistic prediction which, some experts argue, is alien to, and even incomprehensible for, human reasoning (Heaven 2013). As Mayer-Schönberger stresses, we should be ‘less worried about privacy and more worried about the abuse of probabilistic prediction’ as these processes confront us with ‘profound ethical dilemmas’ (in Heaven 2013: 35).
Primary problems to resolve regarding the use of "big data" in humanitarian contexts: dataset size and sampling, probabilistic predictions that are alien to human reasoning, and ethical abuses of PII.
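The worry about probabilistic prediction can be made concrete with a back-of-the-envelope Bayes calculation (the numbers below are hypothetical, chosen only for illustration, not drawn from the text): even a highly accurate model for flagging rare "anomalies" in a large population mostly flags people who are not anomalies.

```python
# A hedged illustration with hypothetical numbers (not from the text):
# probabilistic prediction over whole populations defies intuition
# because base rates dominate. Even a highly accurate detector of a
# rare 'anomaly' mostly flags people who are not anomalies.

def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(anomaly | flagged), via Bayes' rule."""
    true_pos = base_rate * sensitivity            # anomalies correctly flagged
    false_pos = (1 - base_rate) * (1 - specificity)  # non-anomalies wrongly flagged
    return true_pos / (true_pos + false_pos)

# Assume 1 in 10,000 people is a genuine anomaly and the model is
# 99% accurate both ways: a flagged person is still overwhelmingly
# likely to be a false positive.
ppv = positive_predictive_value(1e-4, 0.99, 0.99)  # ~0.0098
```

With these assumed numbers, fewer than 1 in 100 flagged people is a true positive, which is one way to read the "profound ethical dilemmas" of opportunistic anomaly-hunting at population scale.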
Second, this tracking and tracing of refugees has become a deeply ambiguous process in a world riven by political conflict, where ‘migration’ increasingly comes to be discussed in co-location with terrorism.
Data collection process for refugees is underscored as threat surveillance, whether it is intended or not.
Surveillance studies have tracked a shift from discipline to control (Deleuze 1992; Haggerty and Ericson 2000; Lyon 2014) exemplified by the shift from monitoring confined populations (through technologies such as the panopticon) to using new technologies to keep track of mobile populations.
Design implication for ICT4D and ICT for humanitarian response -- moving beyond controlled-environment surveillance to ubiquitous and omnipresent surveillance.
As Coyle and Meier (2009) argue, disasters are often seen as crises of information where it is vital to make sure that people know where to find potable water, how to ask for help, where their relatives are, or if their home is at risk; as well as providing emergency response and humanitarian agencies with information about affected populations. Such a quest for information for ‘security’, in turn, provides fertile ground for a quest for technological solutions, such as big data, which open up opportunities for the extended surveillance of everyday life. The assumption is that if only enough information could be gathered and exchanged, preparedness, resilience and control would follow. This is particularly pertinent with regard to mobile populations (Adey and Kirby 2016).
The Information is Aid perspective that drives my research agenda.
Third, at this juncture, control is being equated with visibility and visibility with personal security. But how these individuals are made visible matters for both privacy and security, let alone the politics of conflating refugees, migration and terrorism. Indeed, working with specific data framing mechanisms affects how the causes and effects of disasters are identified and what elements and people are considered (Frickel 2008).
A finer point on threat surveillance that stems from how classifications and categories are framed.
This also gets at post-colonial interpretations of people, places, and events.
See: Winner, Do Artifacts Have Politics? See: Bowker and Star, Sorting things out: Classification and its consequences. See: Irani, Post-Colonial Computing
First, there is a double dynamic to the generation of data in the refugee crisis.
Data is used by the state to mobilize resources for protective services (border management and immigration/asylum systems) and data is used to count/track refugees in order to provision assistance.
Datafication refers to the fact that ‘we can now capture and calculate at a much more comprehensive scale the physical and intangible aspects of existence and act on them’ (Mayer-Schönberger and Cukier 2013: 97).
It also incorporates metadata as well as information gleaned from typical sources.
There is an uneasy coming together of diverse computational and human intelligences in these intersections, and the ambiguous nature of intelligence – understood, on the one hand, as a capacity for perceiving, learning and understanding and, on the other, as information obtained for strategic purposes – marks complex relationships between ‘good’ and ‘dark’ aspects of big data, surveillance and crisis management.
The promise and peril of gathering collective intelligence, surveillance, and capturing big data during humanitarian crises.
- threat surveillance
- social coordination
- big data
- social media
- collective intelligence
- distant suffering
- information is aid
- design implication
Peer production successfully elicits contributions from diverse individuals with diverse motivations – a quality that continues to distinguish it from similar forms of collective intelligence.
Benkler makes a really bold statement here about how peer production differs from collective intelligence. Not sure I buy this argument.
Brabham on crowdsourcing:
Although peer production is central to social scientific and legal research on collective intelligence, not all examples of collective intelligence created in online systems are peer production. First, collective intelligence can involve centralized control over goal-setting and execution of tasks.
Not all collective intelligence is peer production.
Peer production must adhere to certain values: decentralized control, a broad range of motives and incentives, and FLOSS/Creative Commons rights.
Consistent with this example, foundational social scientific research relevant to understanding collective intelligence has focused on three central concerns: (1) explaining the organization and governance of decentralized projects, (2) understanding the motivation of contributors in the absence of financial incentives or coercive obligations, and (3) evaluating the quality of the products generated through collective intelligence systems.
Focus of related work in collective intelligence studies:
- organizational governance
- motives
- product quality
Historically, researchers in diverse fields such as communication, sociology, law, and economics have argued that effective human systems organize people through a combination of hierarchical structures (e.g., bureaucracies), completely distributed coordination mechanisms (e.g., markets), and social institutions of various kinds (e.g., cultural norms). However, the rise of networked systems and online platforms for collective intelligence has upended many of the assumptions and findings from this earlier research.
Benkler argues that the process, motives, and cultural norms of online network-driven knowledge work are different than systems previously studied and should be re-evaluated.
- Jul 2018
- Jun 2018
"When tasks require high coordination because the work is highly interdependent, having more contributors can increase process losses, reducing the effectiveness of the group below what individual members could optimally accomplish". With too large a team, overall effectiveness may suffer even when the extra contributors increase the available resources; in the end, the costs of coordination might overwhelm the other gains.
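The trade-off above can be sketched with a toy model (my own illustration, not from the source): assume each contributor adds a fixed unit of output while every pair of contributors incurs a small coordination cost on highly interdependent work.

```python
# Toy model of process losses (a sketch with made-up constants):
# each contributor adds one unit of output, but every pair of
# contributors incurs a small coordination cost, and the number of
# pairwise links grows as n*(n-1)/2.

def effective_output(n, unit=1.0, pair_cost=0.05):
    """Net group output: raw contributions minus pairwise coordination overhead."""
    links = n * (n - 1) / 2  # communication channels between members
    return n * unit - pair_cost * links

# Effectiveness first rises with group size, then falls as
# coordination costs overwhelm the extra resources.
sizes = range(1, 31)
best = max(sizes, key=effective_output)  # group size where net output peaks
```

With these assumed constants, net output peaks around 20 contributors and declines beyond that; the exact peak is an artifact of the invented cost parameter, but the rise-then-fall shape is the point of the quote.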
Games such as The Sims series and Second Life are designed to be non-linear and to depend on collective intelligence for expansion. This way of sharing is gradually evolving and influencing the mindset of current and future generations. For them, collective intelligence has become a norm.
Research performed by Tapscott and Williams has provided a few examples of the benefits of collective intelligence to business:
- Talent utilization. At the rate technology is changing, no firm can fully keep up with the innovations needed to compete. Instead, smart firms draw on the power of mass collaboration to involve the participation of people they could not employ. This also helps generate continual interest in the firm among those drawn to new idea creation, as well as investment opportunities.
- Demand creation. Firms can create a new market for complementary goods by engaging an open source community. Firms are also able to expand into new fields that they previously could not have entered without the additional resources and collaboration from the community. This creates, as mentioned before, a new market for complementary goods for the products in these new fields.
- Cost reduction. Mass collaboration can help reduce costs dramatically. Firms can release a specific piece of software or a product to be evaluated or debugged by online communities. The results will be more personal, robust and error-free products created in a short amount of time and at lower cost. New ideas can also be generated and explored through the collaboration of online communities, creating opportunities for free R&D outside the confines of the company.
To address the problems of serialized aggregation of input among large-scale groups, recent advancements in collective intelligence have worked to replace serialized votes, polls, and markets with parallel systems such as "human swarms" modeled after synchronous swarms in nature.
While modern systems benefit from larger group size, the serialized process has been found to introduce substantial noise that distorts the collective output of the group. In one significant study of serialized collective intelligence, it was found that the first vote contributed to a serialized voting system can distort the final result by 34%
To accommodate this shift in scale, collective intelligence in large-scale groups has been dominated by serialized polling processes such as aggregating up-votes, likes, and ratings over time.
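The contrast between serialized and parallel aggregation can be sketched in a small simulation (the parameters and the herding model are my own assumptions, not the cited study's): in a serialized poll each voter sees the running tally and sometimes herds with it, so early votes get amplified, while in a parallel poll everyone relies on private judgment alone.

```python
import random
import statistics

# A sketch with assumed parameters (not the cited study): compare a
# parallel poll, where voters use only private judgment, against a
# serialized poll, where each voter copies the visible majority half
# the time, so early votes can cascade.

def parallel_poll(n, p_correct, rng):
    """Fraction of independent voters who pick the better option."""
    return sum(rng.random() < p_correct for _ in range(n)) / n

def serial_poll(n, p_correct, herd_prob, rng):
    """Each voter follows the running majority with probability herd_prob,
    otherwise votes on private judgment; early votes get amplified."""
    votes = []
    for _ in range(n):
        if votes and rng.random() < herd_prob:
            choice = 2 * sum(votes) >= len(votes)  # copy the current leader
        else:
            choice = rng.random() < p_correct      # private judgment
        votes.append(choice)
    return sum(votes) / n

rng = random.Random(0)
par_runs = [parallel_poll(200, 0.6, rng) for _ in range(300)]
ser_runs = [serial_poll(200, 0.6, 0.5, rng) for _ in range(300)]
```

Run to run, the parallel polls cluster tightly around the 0.6 private signal, while the serialized polls swing widely because early votes can lock in either option; that run-to-run spread is the "substantial noise" the passage describes.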
The idea of collective intelligence also forms the framework for contemporary democratic theories often referred to as epistemic democracy.
"The basis and goal of collective intelligence is mutual recognition and enrichment of individuals rather than the cult of fetishized or hypostatized communities."
Collective intelligence (CI) is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals and appears in consensus decision making.
- Oct 2017
- Dec 2016
What Drives Successful Crowdsourcing?
An article relevant to intelligent democracy, because it shows that open-source-like collaboration often outperforms corporate or governmental organizations at solving problems well.
- Nov 2016
Microsoft Confirms Its Chinese-Language Chatbot Filters Certain Topics
What has humanity come to -- Microsoft, one of America's leading computer companies, is selling China network AIs that help enforce Communist censorship?
We cannot stop other countries from censoring their net, and we should let American vendors sell into nets censored by local law. But -- if we care for democracy and anything even approaching America's current relatively wide equality of power between humans -- we better show the world that a nation with a substantially uncensored, AI-amplified, democratic network can kick butt. Otherwise a democratic human future in the rapidly approaching age of machine superintelligence looks unlikely.
So let's use collective media, collective intelligence, and a basic set of collective human values to create an AI amplified superdemocracy.
The AI singularity is rapidly approaching. Machines that can do virtually everything humans do much faster and better than humans are probably only 5 to 15 years away. If you care at all for anybody, including yourself, that may be living then -- it is important for you that machine superintelligence happen well for humanity. As I wrote in 2010 in an article entitled "Collective Intelligence -- Our Only Hope for Surviving the Singularity":
"The Singularity will NOT occur in a vacuum.
"It will NOT occur in a realm of pure science, engineering, or philosophy. It will NOT occur in one instant, one year, or one decade.
"Instead, it WILL occur --- over multiple decades --- in the real world --- one dominated by struggles for --- personal --- corporate --- political --- and national --- survival, money, and power. How the Singularity's wildly transformative technologies will be developed and deployed will be decided largely by collective entities --- by corporations --- governments --- political parties --- militaries --- bureaucracies --- interest groups --- criminal gangs --- the media --- and public opinion."
So -- if we want the future to be anything other than a sci-fi horror movie -- job one should be using the net and AI to make America and, ultimately humanity, collectively superintelligent.