
Commentary

The most important question when designing AI

May 20, 2024


  • Generative AI has significant potential to benefit humanity. But to do so AI systems need to be designed in ways that support collective intelligence (CI).
  • Emerging research demonstrates how “AI teammates” can be developed to support communication, problem-solving, and decisionmaking in human collectives.
  • Realizing this vision in practical settings and at societal scale will require concerted efforts by technologists, investors, and policymakers to overcome technical issues, address data governance, and demonstrate viable use cases.

Generative artificial intelligence (AI) could be the most significant technological breakthrough since the internet. But having witnessed how commercial pressures have eroded the user experience of Big Tech’s consumer internet platforms (what Cory Doctorow vividly describes as “enshittification”), it’s understandable that we might harbor some skepticism. How will the large-scale AI systems these same companies are developing truly enhance outcomes for humanity?

At this nascent stage in the development of generative AI systems, technical experts and policymakers alike have an opportunity to be constructive by raising mainstream awareness of how the technical design of AI systems can support the interests of people and planet.

Enhancing collective intelligence through AI

When designing AI systems, one useful proxy for the interests of people and planet is collective intelligence. For biological life, all intelligence is collective intelligence (CI). Consider humans: The intelligence of our bodily functions emerges from the teamwork of cells; our cognitive intelligence emerges from cooperation among neurons. Similarly, our social intelligence, from spoken language to the creation of modern (and maybe one day sustainable) societies, has emerged from the collaborative efforts of families, teams, communities, and now vast digital networks whose intelligence has surpassed the sum of their parts.

If CI is the underlying logic of human intelligence and perhaps even the sustainability of life itself, then one of the most important questions we can ask when designing AI applications today is: How can AI enhance CI?

Held to this litmus test, existing large-scale AI systems fall short. Predictive analytics and machine learning have certainly helped transform human economic productivity in areas like manufacturing and finance. But these same technologies have yet to meaningfully grasp and augment the tacit collective processes of communication, problem-solving, and collaboration that remain crucial to CI and value creation in fields such as education, health care, or business management. Where AI has meaningfully impacted human collectives, such as through social media recommendation algorithms (think Instagram or TikTok), gig work task assignment (Uber or DoorDash), or marketplace pricing (Amazon or Alibaba), it has arguably driven more collective atomization, extremism, and exploitation than CI.

Positioning AI as a teammate

It doesn’t have to be this way. There is a different way to design AI, one that positions AI as a “teammate” within human collectives to systematically enhance CI.

Consider the CI of teams. Team-level CI depends on collaboration, understood by behavioral scientists as aligning “mental models” to achieve shared goals. Research shows that collaboration is often hampered by our tendency to anchor on information we already know and by our limited ability to fully integrate others’ perspectives into our own mental models, especially when communication load is high. When collaborating to achieve a shared goal (like solving a puzzle for which each team member holds a unique clue), teams whose members have greater social perceptiveness (or “Theory of Mind” in social psychology) on average achieve higher levels of CI, because they can assimilate the information uniquely held by each individual into a shared mental model for collective problem-solving and action.

In this context, Samuel Westby and Christoph Riedl recently showed how AI teammates could bolster CI in teams. They used a corpus of data from a common CI experiment in social psychology to develop AI “digital twins”: virtual models of real entities, built with machine learning. Informed by a recently proposed agent-based model of CI, each digital twin was designed with an understanding of the team’s shared goal (to identify relevant information to solve a mystery) and a capacity for social perceptiveness, enabling it to generate beliefs about the mental model of its human twin and the mental models of its human teammates.
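
To make this design concrete, here is a minimal, hypothetical sketch (in Python) of a digital twin that tracks a shared goal and maintains beliefs about what each teammate knows. The class, its method names, and the clue-set representation of mental models are illustrative assumptions for this article, not the researchers’ actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a digital twin with a shared goal and a simple
# "theory of mind": beliefs about which clues each teammate holds.
# Illustrative only; not the study's actual code.

@dataclass
class DigitalTwin:
    name: str
    goal: set[str]                   # clues needed to solve the mystery
    own_clues: set[str] = field(default_factory=set)
    # teammate name -> clues we believe that teammate holds
    beliefs: dict[str, set[str]] = field(default_factory=dict)

    def observe_message(self, sender: str, clues_mentioned: set[str]) -> None:
        """Update beliefs about a teammate's mental model from a chat message."""
        self.beliefs.setdefault(sender, set()).update(clues_mentioned)

    def known_to_team(self) -> set[str]:
        """Clues this twin believes the team has already surfaced."""
        shared = set(self.own_clues)
        for clues in self.beliefs.values():
            shared |= clues
        return shared

    def suggest_intervention(self) -> str | None:
        """Propose who to ask next, based on gaps between goal and shared knowledge."""
        missing = self.goal - self.known_to_team()
        if not missing:
            return None
        # Ask the teammate we know least about; they may hold a missing clue.
        least_known = min(self.beliefs, key=lambda t: len(self.beliefs[t]), default=None)
        if least_known is None:
            return None
        return f"Ask {least_known} what they know about: {sorted(missing)}"
```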

Working only from the chat log data of the human participants, the digital twin teams outperformed their corresponding human-only teams by an average of 11%. Simulated interventions to assist human teams (e.g., “Player A, I think you should ask Player C what evidence they have about the crime mystery”) improved human team performance by 16%. The AI system also provided a new way to measure causal mechanisms of CI in teams, such as social perceptiveness, directly from communication patterns in the data, achieving a 170% improvement over the traditional psychometric methods used to assess social perceptiveness in teams.
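
Building on the hypothetical DigitalTwin sketch above, a short driver shows how interventions of the kind quoted could, in principle, be generated from a chat log alone. The player names, clues, and message format here are invented for illustration.

```python
# Toy example, assuming the DigitalTwin class sketched earlier.
goal = {"weapon", "location", "motive"}                       # shared goal
twins = {p: DigitalTwin(p, goal) for p in ("A", "B", "C")}    # one twin per player
for name, twin in twins.items():
    twin.beliefs = {p: set() for p in twins if p != name}     # start with empty beliefs

# Replay an invented chat log; each twin updates its picture of the team.
chat = [("A", {"weapon"}), ("B", {"location"})]
for sender, clues in chat:
    for name, twin in twins.items():
        if name == sender:
            twin.own_clues |= clues
        else:
            twin.observe_message(sender, clues)

print(twins["A"].suggest_intervention())
# -> "Ask C what they know about: ['motive']" (C has contributed the least so far)
```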

These results underscore the potential of an AI digital twin system to enhance CI by functioning as an AI teammate that aids communication, problem-solving, and decisionmaking in human teams. In this study, the AI system also helped uncover more global insights about CI in teams, noticing, for example, that teams that tolerated higher levels of uncertainty early in the experiment (keeping more possible answers under consideration) performed better than teams that narrowed more quickly to a smaller set of possible answers. This suggests that the role of an AI teammate could even extend to that of an AI “coach,” helping humans discover organizational principles and develop behaviors that foster CI in different contexts.

Potential use beyond teams

Beyond teams, this same AI teammate architecture could theoretically enhance the CI of any collective, given sufficient domain knowledge and appropriate data. It could scale up to improve CI in cities, by creating digital twins for each city neighborhood and identifying opportunities for communication, insight sharing, or collaboration between neighborhoods to address shared goals like managing extreme heat events or improving waste management. It could scale down by creating digital twins for body organs, enhancing intelligent and timely communication between them to support overall physical health: a type of anatomical CI. Importantly, an AI teammate is distinct from an AI co-pilot in that a teammate enhances the intelligence of the collective, not merely that of an individual within it.

Designing effective AI teammate architectures

The essential design elements of an AI teammate architecture are relatively straightforward to describe: (1) digital twins that represent critical information of members of a collective, (2) well-defined collective goals, and (3) social perceptiveness, enabling alignment of mental models between members of a collective. But realizing this vision in practical settings and at societal scale will require concerted efforts by technologists, investors, and policymakers in three key areas.

  1. Overcoming technical challenges

While AI digital twins have been successfully implemented in physical systems like manufacturing supply chains and smart buildings, applying them to support CI in more complex and autonomous living collectives is significantly more challenging. Getting AI teammates from prototype to product will require further targeted investment in advancing AI models (e.g., machine learning techniques that account for the multiscale dynamics of biological collectives), computational architectures that can facilitate trust and explainability in human-machine interactions, and more sophisticated data collection methods that can capture context-sensitive signals of human collective behavior. Practical implementation in teams, for example, will likely necessitate a paradigm shift from “mobile computing” to “pervasive computing,” in which always-on wearable devices—such as the AI Pin, Plaud, or Smart Glasses—replace traditional devices like phones and PCs as the main human-computer interfaces.

  2. Addressing data governance

It is difficult to imagine how AI teammates (and the more pervasive computing interfaces required to support them) will emerge under the current data governance status quo, in which private technology firms and data brokers extract and hoard users’ digital trace and personal identification data for private gain. For users to entrust AI with enhancing CI, policymakers will need to argue for a “data level playing field,” in which individuals and communities own their data and determine with whom they exchange it and for what purpose. Community-driven data cooperatives, unions, or trusts, combined with privacy-preserving machine learning protocols such as federated learning (sketched below), could enable a paradigm in which individuals and communities that contribute their data and insights to enhance CI at different scales increase their equity in the data economy without sacrificing personal or community privacy.
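
As one illustration of such a protocol, here is a minimal federated-averaging sketch in the style of the well-known FedAvg algorithm: each cooperative trains a model on its members’ data locally and shares only model parameters with an aggregator, never the raw data. The toy task (linear regression), data, and hyperparameters are invented; real deployments would layer on secure aggregation and differential privacy.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One cooperative's local training: a few gradient steps on its own data."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
    return w

def federated_round(w, cooperatives):
    """Aggregate local updates, weighted by each cooperative's data size."""
    updates = [local_update(w, X, y) for X, y in cooperatives]
    sizes = [len(y) for _, y in cooperatives]
    return np.average(updates, axis=0, weights=sizes)

# Three cooperatives with invented local data; raw data never leaves them.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
coops = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    coops.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(20):            # 20 communication rounds
    w = federated_round(w, coops)
print(w)                       # converges toward [2.0, -1.0]
```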

  3. Demonstrating AI teammates’ transformative potential for humanity

The natural language capabilities of large language models (LLMs) have helped bridge the substantial gap between human users and AI systems, democratizing access to AI applications and bringing AI closer to the natural rhythms of human collective life. Accordingly, generative AI is inspiring new ideas for enhancing CI across a range of contexts, from scaling the richness of local-level participatory democracy exercises to inform national-scale policy decisions, to developing an open-source, community-driven AI ecosystem that allows local actors to participate in the training of large-scale generative AI models, or even assigning AI digital twins to other species to give them agency in human efforts to value and regenerate natural ecosystems. A full range of actors, from the entrepreneurs developing new systems to the governments and investors backing them, will need to work together to ensure that promising prototypes can scale into widely accessible solutions.

Conclusion

At this juncture in the development of generative AI systems, AI experts and policymakers can collaborate to raise mainstream awareness of how the design of AI systems matters for people and planet. We can start by asking one critical question: Does AI enhance CI? And if not, how do we design it so that it does?