Governing the AI Revolution: The Research Landscape
William J. Perry Conference Room
Encina Hall, 2nd floor
616 Serra Street
Stanford, CA 94305
Abstract: Artificial intelligence (AI) is rapidly improving. The opportunities are tremendous, but so are the risks. Existing and soon-to-exist capabilities pose several plausible extreme governance challenges, including massive labor displacement, extreme inequality, an oligopolistic global market structure, reinforced authoritarianism, shifts and volatility in national power, and strategic instability. Further, there is no apparent ceiling to AI capabilities; experts envision that superhuman capabilities in strategic domains will be achieved within the coming four decades, and radical surprise breakthroughs are possible. Such achievements would likely transform wealth, power, and world order, though global politics will in turn crucially shape how AI is developed and deployed. The consequences are plausibly of a magnitude and on a timescale that dwarf other global concerns; leaders of governments and firms are asking for policy guidance; and yet scholarly attention to the AI revolution remains negligible. Research is thus urgently needed on the AI governance problem: the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.
This problem can be broken into three complementary research clusters:
- The technical landscape: What are the trends and possibilities in AI capabilities? What are their likely consequences? What are the externalities from AI, and how can they best be addressed?
- AI politics: Who are the relevant actors, what are their interests, and what can they do? What is the nature of the conflict and cooperation challenges that they are likely to face? How can they overcome dangerous conflictual dynamics, in particular an international arms race?
- AI governance: Given our understanding of the technical landscape and AI politics, what options are available for the global governance of AI, and which of them should we work towards?
Work on the AI governance problem must draw on the full body of social science and policy expertise. Solutions are needed by an unknown, but plausibly impending, deadline.
Speaker Bio: Allan Dafoe is an Assistant Professor of Political Science at Yale University and a Research Associate at the Future of Humanity Institute, University of Oxford. His research seeks to understand the causes of world peace and stability. Specifically, it has examined the causes of the liberal peace and the role of reputation and honor as motives for war. He also develops methodological tools and approaches to enable more transparent, credible causal inference. Allan is beginning research on the international politics of transformative artificial intelligence.