
Augmented CI and Human-Driven AI: How the Intersection of Artificial Intelligence and Collective Intelligence Could Enhance Their Impact on Society

Stefaan Verhulst — November 08, 2017

Toward a Research Agenda

As the technology, research and policy communities continue to seek new ways to improve governance and solve public problems, two assets are taking on increasing importance: data and people. Leveraging data and people's expertise in new ways offers a path toward smarter decisions, more innovative policymaking, and more accountability in governance. Yet unlocking the value of these two assets requires not only increased availability and accessibility (through, for instance, open data or open innovation), but also innovation in methodology and technology.

The first of these innovations involves Artificial Intelligence (AI). AI offers an unprecedented ability to quickly process vast quantities of data and generate data-driven insights that address public needs. This is the role it has played in New York City, for example, where FireCast leverages data from across the city government to help the Fire Department identify the buildings at highest risk of fire. AI is also being explored as a way to improve education, urban transportation and humanitarian aid, and to combat corruption, among other sectors and challenges.

The second area is Collective Intelligence (CI). Although it receives less attention than AI, CI offers similar potential breakthroughs in changing how we govern, primarily by creating a means for tapping into the “wisdom of the crowd” and allowing groups to create better solutions than even the smartest experts working in isolation could ever hope to achieve. For example, in several countries patients’ groups are coming together to create new knowledge and health treatments based on their experiences and accumulated expertise. Similarly, scientists are engaging citizens in new ways to tap into their expertise or skills, generating citizen science – ranging from mapping our solar system to manipulating enzyme models in a game-like fashion.

Neither AI nor CI offers a panacea for all our ills; they each pose certain challenges, and even risks. The effectiveness and accuracy of AI rely substantially on the quality of the underlying data as well as the human-designed algorithms used to analyze that data. Among other challenges, it is becoming increasingly clear how biases against minorities and other vulnerable populations can be built into these algorithms. For instance, some AI-driven platforms for predicting criminal recidivism significantly overestimate the likelihood that black defendants will commit additional crimes relative to white defendants. (For more examples, see our Selected Readings on Algorithmic Scrutiny.)

In theory, CI avoids some of the risks of bias and exclusion because it is specifically designed to bring more voices into a conversation. But ensuring that this multiplicity of voices adds value, not just noise, can be an operational and ethical challenge. As it stands, separating the signal from the noise in CI initiatives can be time-consuming and resource-intensive, especially for smaller organizations or groups that lack resources or technical skills.

Despite these challenges, there exists a significant degree of optimism surrounding both of these new approaches to problem solving. Some of this is hype, but some of it is merited: CI and AI do offer very real potential, and the task facing policymakers, practitioners and researchers is to find ways of harnessing that potential so as to maximize benefits while limiting possible harms.

In what follows, I argue that the solution to the challenge described above may involve a greater interaction between AI and CI. These two areas of innovation have largely evolved and been researched separately until now. However, I believe that there is substantial scope for integration, and mutual reinforcement. It is when harnessed together, as complementary methods and approaches, that AI and CI can bring the full weight of technological progress and modern data analytics to bear on our most complex, pressing problems.

To deconstruct that statement, I propose three premises (each with a set of research questions) toward establishing a necessary research agenda on the intersection of AI and CI that can build more inclusive and effective approaches to governance innovation.

AI Meets CI

Premise I: Toward Augmented Collective Intelligence: AI will enable CI to scale

While CI is built around the idea that groups of citizens or experts can be smarter and more effective than individuals, scaling CI initiatives can be difficult. This is largely because of the transaction costs involved in CI. Unlike the more automated processes of AI, CI typically involves a substantial degree of human effort in curating, inviting and enabling participation by particular groups of individuals or institutions. In addition, significant effort can be required to separate signal from noise. All of this means that CI can be fairly labor-intensive and hard to automate.

Could the "superhuman" capabilities represented by AI help optimize human-driven CI initiatives and overcome some of these scaling challenges? If implemented effectively, automation through AI could indeed save time and effort, leading to what we call "Augmented Collective Intelligence." Consider, for instance, the Notice and Comment (N&C) platform, which not only enables comments on policy proposals but, more importantly, leverages an AI tool called Regendus to generate insights from the comments received and inform regulatory decision-making. Similarly, Wikipedia, the paradigmatic (if ailing) example of collective intelligence, has started to use AI bots to help edit articles, identify and clean up vandalism, and categorize and tag content.

Besides automation, AI can also help by identifying communities with something relevant and valuable to offer CI initiatives. In addition, techniques like sentiment analysis can lower the burden on those charged with acting upon CI-generated insights; such techniques can at least partially automate, and generally improve, the processes of gauging, analyzing and even acting upon inputs received from participants in CI initiatives.
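
To make this concrete, the sketch below shows how a sentiment model might be used to triage incoming comments on a CI platform. It is a minimal illustration only, using the off-the-shelf VADER model shipped with the NLTK library; the comments and the triage threshold are invented assumptions, not features of any platform mentioned above.

```python
# A minimal sketch of AI-assisted triage for CI inputs, using the VADER
# sentiment model from NLTK. Comments and thresholds are illustrative.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

comments = [
    "This proposal would greatly improve access to public transit.",
    "Terrible idea. It ignores the needs of small businesses entirely.",
    "See section 4(b); the funding formula seems inconsistent with 2016 rules.",
]

analyzer = SentimentIntensityAnalyzer()

for comment in comments:
    score = analyzer.polarity_scores(comment)["compound"]  # -1 (neg) .. +1 (pos)
    # Route strongly polarized comments to reviewers first; neutral but
    # substantive comments may instead need closer expert reading.
    bucket = "priority review" if abs(score) > 0.5 else "standard queue"
    print(f"{score:+.2f}  {bucket}:  {comment}")
```

Even a crude first pass like this can shrink the pile a human reviewer must read from scratch, which is precisely where CI's transaction costs accumulate.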

Questions for Further Research:

  • How can AI scale CI? What attributes of AI can make a difference?
  • How can we ensure that the introduction of AI into CI initiatives does not introduce new biases or increase the risk of making decisions based on bad data?
  • How can we ensure that wide, diverse audiences are able to participate in AI-led Collective Intelligence initiatives?
  • What use cases could act as testbeds for further experimentation into Augmented Collective Intelligence?
  • What metrics of success could help determine the impact (whether positive or negative) of Augmented Collective Intelligence efforts?

Premise II: Toward Human-Driven Artificial Intelligence: CI will humanize AI

Much of the concern surrounding the expansion and evolution of AI revolves around its perceived "inhumanity." Dystopian scenarios envision the possibility of a "robot takeover," and, less fantastically, an increasingly wide range of consequential domains (for example, military decision-making, banking and even driving) are undergoing a somewhat discomfiting human-to-machine transition, with unpredictable consequences. AI often functions as a black box: its very advantage (the ability to make decisions beyond the reach of humans) is also cause for concern. Despite calls for greater algorithmic transparency, the fact remains that even the creators of AI algorithms often cannot fully explain the actions or results produced by their creations (as evidenced, for instance, by recent "misbehaving" conversation bots).

CI has a potentially valuable role to play here, too. For example, introducing a human element into AI through coordinated CI efforts could help surface biases embedded in datasets and demystify the analytics performed on those datasets – one of the rationales behind a commercial service called CrowdFlower. CI could also increase the legitimacy of AI initiatives through a collaborative design process (which can itself be vetted by CI) to ensure that AI interventions that raise ethical or other concerns are developed carefully, or not at all. MoralMachine, for instance, gathers the "human perspective on moral decisions made by machine intelligence." More generally, introducing a human element could bolster the legitimacy of AI in the public eye and mitigate some of the emerging concerns and opposition surrounding the field.
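
As a concrete illustration of the first of these roles, the sketch below shows one simple way crowd judgments might be used to audit an AI system's outputs for group-level bias. Everything in it, from the audit records to the group labels, is an invented assumption; it stands in for the far richer workflows of services like CrowdFlower.

```python
# A minimal sketch of a crowd-assisted bias audit of an AI system's outputs.
# Volunteers label whether each case the model flagged was actually justified,
# and we compare false-positive rates across groups. All records are invented.
from collections import defaultdict

# (group, model_flagged, crowd_says_justified) -- assumed audit records.
audit = [
    ("A", True, False), ("A", True, True),  ("A", True, False),
    ("A", False, False), ("B", True, True), ("B", True, True),
    ("B", False, False), ("B", True, False),
]

flagged = defaultdict(int)
false_pos = defaultdict(int)
for group, model_flagged, justified in audit:
    if model_flagged:
        flagged[group] += 1
        if not justified:
            false_pos[group] += 1

# A large gap between groups is a signal the system needs scrutiny.
for group in sorted(flagged):
    rate = false_pos[group] / flagged[group]
    print(f"group {group}: false-positive rate {rate:.0%}")
```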

These are just some of the ways in which CI can help mitigate the risks posed by AI. Interestingly, emerging research suggests that CI may have a role to play not only in increasing the ethical legitimacy of AI, but also in increasing AI's effectiveness. Researchers at MIT, for instance, have experimented with using crowdsourced expertise to identify the main "features" of big data sets; in this way, CI becomes the first point of entry into the data, which is then analyzed more deeply using traditional, automated AI. Similarly, other researchers have been applying the social learning techniques used by humans to create more intelligent neurons in AI algorithms; the aim is to make individual neurons learn from each other in much the same way that humans use social and cultural contexts to make decisions. Both of these examples suggest that, in addition to increasing legitimacy and trust, CI may have a role to play in enhancing the capabilities of AI, much as AI can help scale up CI (see the previous section).
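
The logic of the first of those experiments can be suggested with a small sketch: the crowd nominates candidate features, and an automated model is trained only on the features that clear a vote threshold. The vote counts, dataset and threshold below are hypothetical stand-ins, not a description of the actual MIT research.

```python
# A minimal sketch of the "crowd proposes features, machine does the rest"
# pattern. Vote counts and records are invented for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Crowd votes on which raw columns are likely to matter (assumed input).
feature_votes = {"building_age": 14, "num_violations": 11,
                 "sq_footage": 3, "owner_name_length": 1}
chosen = [f for f, votes in feature_votes.items() if votes >= 5]

# Toy dataset standing in for administrative records.
df = pd.DataFrame({
    "building_age":      [80, 12, 55, 95, 20, 67, 31, 88],
    "num_violations":    [ 6,  0,  3,  9,  1,  4,  0,  7],
    "sq_footage":        [4000, 1200, 2500, 5200, 900, 3100, 1500, 6000],
    "owner_name_length": [12, 8, 15, 9, 11, 7, 13, 10],
    "high_risk":         [ 1,  0,  0,  1,  0,  1,  0,  1],
})

# Automated step: the model only ever sees the crowd-selected features.
model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, df[chosen], df["high_risk"], cv=2)
print(f"features={chosen}  mean accuracy={scores.mean():.2f}")
```

The design choice worth noting is the division of labor: human judgment handles the open-ended question (what might matter?), while the machine handles the mechanical one (how much does it matter?).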

Questions for Further Research:

  • Can CI legitimize AI processes by bringing in a more human element, and, if so, to what extent does that legitimizing function improve upon other pathways – e.g., expert review panels for AI algorithms?
  • What strategies could allow for a collective governance process to minimize the power asymmetries created by AI?
  • Can CI offer greater ability to understand and explain the decision-making behind and outcomes of AI processes?
  • Can CI play a role in integrating values and ethics into the design and functioning of AI processes?
  • What metrics of success could help to determine whether or not Human-Driven Artificial Intelligence efforts make a difference?

Premise III: Open Governance will drive a blurring between AI and CI

For different reasons, open governance – understood as processes for more equitable and participatory pathways in decision making and problem solving – can appear to be at odds with both AI and CI. AI initiatives can be biased, closed and opaque. Similarly, CI efforts can be driven by narrowly defined communities, with the result that traditionally marginalized or disenfranchised groups (which may in fact possess relevant expertise) can be excluded. Both scenarios run counter to the openness principle at the core of open governance.

I believe that, rather than acting in opposition, the methods and values of open governance can in fact be embedded into both AI and CI, in the process helping these two innovations move closer together. For example, efforts to introduce greater openness into AI are likely to lead to more integration with CI (which, as described above, can help increase the inclusiveness and transparency of AI initiatives). Likewise, as CI seeks to move beyond limited communities of participation, AI can play an essential role in selecting actors and stakeholders who may widen the collective conversation, while at the same time ensuring the relevance of their inputs and a high signal-to-noise ratio. In these ways, open governance and its underlying principles have a valuable role to play both in strengthening AI and CI and in bringing these two strands of innovation closer together.
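
One way to picture that participant-selection role is a simple relevance ranking: score prospective contributors by how closely their prior contributions match the topic at hand. The sketch below does this with TF-IDF cosine similarity; the topic and candidate profiles are invented for illustration, and a real system would need to guard against this kind of filter re-excluding unconventional voices.

```python
# A minimal sketch of using AI to widen a CI conversation while keeping the
# signal-to-noise ratio high: rank prospective contributors by how closely
# their prior work matches the topic. Profiles are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic = "flood resilience and stormwater infrastructure in coastal cities"

candidate_profiles = {
    "hydrologist":     "stormwater runoff modeling, urban drainage, flood maps",
    "transit_blogger": "bus rapid transit, fare policy, commuter rail schedules",
    "civil_engineer":  "seawall design, coastal infrastructure, storm surge",
    "tax_lawyer":      "municipal bonds, property tax assessment appeals",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([topic] + list(candidate_profiles.values()))
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Invite the most relevant voices first; everyone else stays in the pool.
for name, score in sorted(zip(candidate_profiles, scores),
                          key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {name}")
```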

Questions for Further Research:

  • How can the values embodied in open governance be integrated into the design of AI-meets-CI initiatives?
  • How can AI and CI be used in concert to create more active citizenship?
  • How can open government principles address ethical concerns surrounding both AI and CI?
  • Do we need new institutions to help push forward new governance approaches that may emerge from joint AI-CI initiatives?
  • Is there a need for an AI meets CI Magna Carta articulating principles around risk management, redress systems, and accountability; duties of institutions across the AI/CI value chain; and prohibitions related to the expanded use of AI and/or CI?
  • How compatible (if at all) are the metrics of success for, respectively, AI, CI and open government?

The above represents just an initial exploration of the intersection of AI and CI. More importantly, it is a call for more interdisciplinary conversation and research between two fields that will shape how we solve problems in the future but that have, until now, largely evolved and been researched separately. Do join the conversation and let me know your reactions and suggestions (stefaan [at] thegovlab.org or tweet @thegovlab)!

(Many of the ideas above were first presented at the meeting of the MacArthur Research Network on Opening Governance prior to the Collective Intelligence Conference. Thanks to all the participants of those two events who provided input. Thanks also to Andrew Young, Knowledge Director at the GovLab, for making sure my initial outline eventually got written and expanded upon, and to Michelle Winowatan for her research assistance.)