Navigating the Complexities of Existential Risk: Insights from the 2023 Stanford Existential Risks Conference

SERI

In an era defined by technological advancement and global interconnectivity, humanity faces an array of existential risks that challenge our very existence. The 2023 Stanford Existential Risks (SERI) conference delved deep into various fields, from epistemology to psychology, aiming to shed light on strategies for navigating these complex threats. 

Despite the daunting nature of these challenges, the conference highlighted potential solutions, including the importance of Collective Intelligence and diverse perspectives. Here, SERI research scholars and faculty members Trond Arne Undheim, Paul Edwards, Steve Luby, and Dan Zimmer examine the major global risks and effective strategies discussed at the 2023 conference.

Section I: Epistemology, Psychology, and Aesthetics

  1. “Epistemic security” is proposed as a priority global catastrophic risk. How did the conference address the implications of this concept, and of the escalating threats to it, for a society’s ability to effectively prepare for or respond to various risks and crises?

Trond: In our technologically advanced age, deciding who and what to believe is increasingly difficult, even for those trained in assessing knowledge. Epistemic security, a term denoting the resilience of societal knowledge systems, was a focal point at the conference. Elizabeth Seger, a researcher at the Centre for the Governance of AI (GovAI) in Oxford and a research affiliate at the AI: Futures and Responsibility Project (AI:FAR) at the University of Cambridge, highlighted the escalating threats to our epistemic security in her paper, suggesting that our ability to respond to crises hinges on bolstering this resilience (Seger, 2023). Seger outlined three features of epistemic security within a society: information accessibility, information environment safety, and information recipient sensitivity. The higher a society’s epistemic security, the more resilient it will be to any calamity we might face in the future. Given that it is a “fix all” strategy, it might make sense for humanity to invest significantly in strengthening this feature of our societies.

  2. How did the conference use historical examples and the concept of “Agents of Doom” to better understand and prepare for intentional risks in the 21st century?

Dan: Each period of interest in the study of existential risk (x-risk) has been catalyzed by a different kind of risk and marked by the historical and philosophical context in which it arose, whether this be the sense of all-or-nothing existential confrontation that marked the early Cold War (Jaspers, 1961) or the discovery of new degrees of ecological entanglement in the 1980s (Schell, 2000). Today’s study of x-risk is no different. Concern about the x-risk of AI first came to the fore in the 1990s among a cadre of transhumanists and futurists (Bostrom, 2005; More, 2013). Philosopher Nick Bostrom’s influential typology broadened the scope of x-risk studies, encompassing not just biological species but also potential postbiological successors. However, criticism has arisen regarding the narrow philosophical and methodological foundations of the field. With this history in mind, the Stanford Existential Risks Initiative (SERI) devoted its third annual Existential Risks Conference to the task of helping to broaden the basis of x-risk studies. Conference organizers invited submissions that addressed the past, present, and possible futures of x-risk studies, seeking to showcase the contributions of scholars and practitioners from a broader range of disciplinary and institutional backgrounds than typically feature at x-risk gatherings.

Paul: During the conference, Dr. Émile Torres spoke about several actual “agents of doom”: individuals and groups who explicitly sought to kill vast numbers of people. Hitler’s Holocaust, Stalin’s purges and gulags, and Mao’s “cultural revolution” remain the largest historical examples of mass killing. None of these leaders sought to end all human life, but others have. One example is Shoko Asahara, leader of the Japanese doomsday cult Aum Shinrikyo, who hoped to provoke a nuclear Armageddon to hasten the arrival of “end times” that only a few faithful would survive. Torres’s point: while most people find it impossible to imagine deliberately exterminating humanity, a few can – and do. With sufficient prowess in dangerous technologies and/or skill at social engineering, such people may yet find ways to bring about mass death on a global scale (Torres, 2023). 

  3. In addressing the psychological effects and public health risks of cascading global crises, particularly the vulnerabilities of one-third of the global population, what insights were shared during discussions? Were there explorations of practical strategies, including the proposed concept of an “antifragile mindset”?

Steve: Led by David D. Luxton and Eleanore Watson (Luxton and Watson, 2023), we had a fruitful debate on the psychological effects and public health risks of the challenges humanity is facing. One is the potential impact of emerging technologies on the skills needed in the labor market: those who fail to keep up might find themselves out of fruitful work in the near future. Another is the algorithmic and systematic distortion of reality and behavioral manipulation by influential AI systems, including prejudice and unfairness. Third, these technologies tend to deliver supernormal stimuli, which might become more engaging than their real-life counterparts. All of these problems carry psychological and public health effects to be mindful of. Another speaker, Dana Klisanin, introduced antifragility as an evolutionary trait but noted that its interpretation varies across cultural contexts (Klisanin, 2023).

Steve: The health of the more than 8 billion people who live on the planet depends on the everyday workings of an economy supported by local, national, and international trade. This trade in life-sustaining goods requires trust, coordination, and reciprocity. Several risks could undermine this trust and the economic system. Such cascading dysfunction then threatens communities with life-threatening shortages of energy, water, and food.

Section II: Crises in the Earth System

  1. How did the conference explore the potential cascading effects on society of transgressed planetary boundaries, particularly in agriculture and food security, and the interconnectedness of these issues? Furthermore, what implications were discussed regarding these cascades for both known and unforeseen global catastrophic risks?

Paul: Florian Jehn, representing the nonprofit organization ALLFED, delivered a presentation on the concept of “planetary boundaries,” or limits of human interference in the Earth system, recently delineated by scientists such as Johan Rockström and Jan Zalasiewicz. Jehn’s speculative question: how might present-day efforts to stay within planetary boundaries affect the survivors of an all-out US-Russia nuclear war? They would live amidst a nuclear winter that could cause even further suffering and death, with diminished photosynthesis causing massive crop failures. Jehn pointed out the paradox that while we currently seek to reduce nitrogen emissions from fertilizer overuse, increased nitrogen emissions today could serve as a nutrient buffer during nuclear winter, when artificial fertilizer might be unavailable. Similarly, mitigating climate change today could deepen the temperature drop that would occur during a nuclear winter (Jehn, 2023).

  2. How did the conference address worries about climate change threats, including policy reversals, the impact of artificial intelligence and social media on climate denialism, and challenges in global governance? Were there discussions of strategies to tackle these challenges, especially regarding climate policy changes and technology-amplified misinformation?

Paul: My contribution to the conference asked: “Is climate change ungovernable?” I first reviewed the potential for catastrophic, civilization-threatening climate change within the next two to three centuries. Next, I argued that empirical evidence supports the likelihood of future climate policy reversals by major emitters. Major policy reversals have already occurred. Further, climate denialism and misdirection are already being amplified by artificial intelligence and social media. The depressing conclusion: chances are high that current structures of climate governance will not succeed in preventing catastrophic levels of climate change (Edwards, 2023).

Section III: Risk Intersections

  1. Highlighting the imperative for a substantial redesign of institutions’ approaches to biosecurity education, what specific discussions unfolded regarding the identified approaches to biosecurity education? And were there conversations about strategies aimed at promoting responsible research practices and minimizing the risks of catastrophic events stemming from the misuse of dual-use research of concern (DURC)?

Trond: Sofya Lebedeva addressed the vital topic of dual-use biosecurity education (Lebedeva, 2023), maintaining that institutions need to fundamentally redesign their approaches, centered on better training and coursework for their employees and bench scientists. According to her, seven key themes emerge as markers of a course’s success, including length, context, personal connection, and instructor excellence. Tailoring the educational approach to the target audience is especially important; otherwise, they might feel it is irrelevant to them. Interestingly, these are relatively simple fixes for an important problem.

  2. With a focus on Dual-Use Research of Concern principles across various fields, how did the conference address risks related to dual-use technologies in information technology and artificial intelligence? Were there talks about limiting their spread in today’s complex security landscape?

Trond: Dual-use technologies in AI cut both ways: they can benefit human security but also hamper it. At the conference, Ashok Vaseashta emphasized that unlike traditional threats such as nuclear weapons, AI’s widespread integration into open societies presents unique risks (Vaseashta, 2023). Unmanned aerial vehicles (UAVs) and electronic warfare (EW) are other concerns to be mindful of. Curbing proliferation is difficult and, in most cases, does not make much sense. Instead, he says, organizations must ensure the reliability, integrity, and security of AI systems through robust authentication mechanisms, secure development practices, and regular security audits. This is especially salient in cyber defense and cybersecurity, where AI/ML can be used to better detect, respond to, recover from, and adapt after attacks. New risks emerging from AI include the possibility that it can help generate recipes for toxins or pathogens.

  3. Focusing on the positive and negative aspects of artificial intelligence, particularly the role of AI fairness in addressing biases, how did discussions at the conference analyze AI fairness and its societal impact?

Trond: As Ondrej Bohdal and colleagues addressed, AI systems can harm parts of the population through biased predictions. AI fairness focuses on mitigating such biases to ensure that AI decision-making is not discriminatory towards certain groups. One issue they pointed to is the feedback loop that arises as new AI models are trained on the outputs of biased AI models (Bohdal et al., 2023).
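To make this concern concrete, here is a minimal sketch (my own illustration, not code from Bohdal et al.) of one widely used fairness diagnostic, the demographic parity difference: the gap in favorable-decision rates between two groups. The array names `decisions` and `groups`, and the data, are hypothetical.

```python
import numpy as np

def demographic_parity_difference(decisions, groups):
    """Gap in favorable-decision rates between two groups.

    decisions: 0/1 array of model decisions (1 = favorable outcome)
    groups:    0/1 array of group membership labels
    A value near 0 means both groups receive favorable decisions
    at similar rates; larger gaps flag potential bias.
    """
    decisions = np.asarray(decisions)
    groups = np.asarray(groups)
    rate_a = decisions[groups == 0].mean()  # favorable rate in group 0
    rate_b = decisions[groups == 1].mean()  # favorable rate in group 1
    return abs(rate_a - rate_b)

# Hypothetical data: a model that favors group 0 over group 1
decisions = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
groups    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(decisions, groups))  # 0.6
```

If skewed decisions like these were then fed back as training labels for the next model generation, the gap could persist or widen, which is the kind of feedback loop Bohdal and colleagues pointed to.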

  4. Did the conference discuss the complex issue of nuclear weapons in today’s international security, considering the critique of the USA and Russia leaving the Intermediate-Range Nuclear Forces (INF) Treaty? Were there talks about the potential for a looming nuclear war, despite the intended purpose of deterrence?

Trond: We delved into the pressing issue of nuclear weapons with the insights of Stanford professor Jean-Pierre Dupuy, a philosopher. He pointed out that nuclear war looms over our heads simply because nuclear weapons exist: their existence implies that it is always possible these weapons may be used at some point (Dupuy, 2023). Dupuy has since published his book The War That Must Not Occur (Stanford University Press, 2023) in English; it previously appeared in French.

Section IV: Governance, Policy Infrastructure, and Scenarios

  1. In exploring the transformative potential of Collective Intelligence (CI), the shared or group intelligence that emerges when people work together to solve complex problems, to enhance resilience against diverse risks, were there discussions on how CI intersects with effectively mitigating global catastrophic risks?

Trond: Processing distributed information effectively is a big challenge, especially during a crisis. Vicky Chuqiao Yang and Anders Sandberg discussed collective intelligence, highlighting its potential to enhance overall resilience (Yang and Sandberg, 2023). They maintained that one main benefit is that building better information infrastructure could improve human group performance, which might be needed to survive an existential crisis. Work on CI can also teach lessons about how humans can work better together. Human-AI collaboration is likewise in its infancy, but once we find better approaches, it too could help in a crisis.
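As a toy illustration of one mechanism behind CI (my own sketch, not from Yang and Sandberg’s paper), averaging many independent, noisy estimates typically yields a group answer more accurate than the typical individual, the classic “wisdom of crowds” effect. The simulation below assumes unbiased individual estimates with independent errors.

```python
import numpy as np

rng = np.random.default_rng(42)

true_value = 100.0            # quantity the group tries to estimate
n_people, n_trials = 50, 1000

# Each person produces an unbiased but noisy estimate (std dev 20)
estimates = true_value + rng.normal(0.0, 20.0, size=(n_trials, n_people))

individual_error = np.abs(estimates - true_value).mean()
group_error = np.abs(estimates.mean(axis=1) - true_value).mean()

print(f"average individual error: {individual_error:5.2f}")  # ~16
print(f"average group error:      {group_error:5.2f}")       # ~2.3

# With independent errors, the group error shrinks roughly like
# 1/sqrt(n_people) relative to an individual's error.
```

The caveat is that the effect degrades sharply when errors are correlated, for instance when everyone draws on the same distorted information sources, which is one reason better information infrastructure matters so much for group performance.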

  2. As the conference navigated through diverse approaches to existential risks, what defining convergences in policy support emerged?

Dan: Taken together, the papers presented at the conference addressed risk agency, developments in artificial intelligence and machine learning systems, biorisks, overstepped planetary boundaries, ecological collapse catalyzed by nuclear winter, collective intelligence, crisis governance, dual-use considerations, policy concerns, psychological impact, the securitization of risk, and scenario planning. By highlighting where these paths intersect, the conference shed light on the interconnected nature of global catastrophic risks and the potential for cascading effects. The breadth of perspectives presented reflects the belief that x-risk studies should not only focus on preventing human extinction but also foster a broader dialogue on what makes human existence meaningful and how to navigate risks while maintaining an open future.

  3. Unveiling the potential impact of the International Panel on Global Catastrophic Risks (IPGCR) in addressing looming global threats, what defining insights emerged about its focus on expert reports, intricate risk intersections, and conveying expert opinions without necessarily seeking consensus?

Dan: A glance at the history of the 20th century reveals the disturbing ease with which attempts to save humankind from extinction can transform into justifications for excluding, expropriating, or even exterminating certain kinds of humans for the sake of the greater good. X-risk studies cannot avoid engaging with the hard choices that will be required to balance human flourishing with survival; instead, it is precisely because these choices are so important and unavoidable that they cannot remain the province of a small, homogeneous group of self-appointed specialists. It is all too easy to conflate the end of one’s way of life with the end of the world as a whole, and it is only by including a plurality of perspectives that x-risk studies can overcome the inevitable parochialism and self-interest that creeps into discussions of who must sacrifice what to ensure human survival and the possibility of flourishing. Only a pluralistic field of x-risk studies can hope to identify its own blind spots and credibly resist claims that it conflates the continuation of patriarchy or extractive capitalism or white supremacy or Euro-American imperialism with the survival of humankind as a whole. Here diversity is strength. Significant ethical and methodological disagreement represents both a hallmark of epistemic health and an inoculation against the kind of groupthink that could sanction almost any loss of life or freedom for the sake of survival. Only through the hard work of engaging in sustained dialogue across differences concerning why human existence should be preserved can the field of x-risk studies continue to productively address how best to diminish today’s growing welter of anthropogenic existential risks.

Click here for a full list of conference proceedings

Below are links to episodes of Trond Arne Undheim's podcast Futurized, where he further discusses the conference proceedings with the authors and speakers:

Epistemic Security with Elizabeth Seger

Web: https://www.futurized.org/epistemic-security/

LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7178374232035229696

Twitter: https://twitter.com/Futurized2/status/1772610301738770815

Food After Nuclear Winter with Florian Jehn

Web: https://www.futurized.org/food-after-nuclear-winter/

LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7176182703245930496

Twitter: https://twitter.com/Futurized2/status/1772618585535315978

Existential Risks with Dan Zimmer

Web: https://www.futurized.org/existential-risks/

LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7173321030189441026

Twitter: https://twitter.com/Futurized2/status/1767557498905534495

Existential Risk Controversies with Émile Torres

Web: https://www.futurized.org/existential-risk-controversies/

LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7150505939387781120

Twitter: https://twitter.com/Futurized2/status/1744744836404486319