Stanford Existential Risks Initiative (SERI)

Our Mission

The Stanford Existential Risks Initiative is a collaboration between Stanford faculty and students dedicated to mitigating global catastrophic risks (GCRs). Our goal is to foster engagement from students and professors in meaningful work to preserve the future of humanity by providing skill and knowledge development, networking, and professional pathways for Stanford community members interested in pursuing GCR reduction. Our concrete programming includes a summer research program, speaker events, discussions, and a Thinking Matters class, Preventing Human Extinction (THINK 65), taught by the initiative’s two faculty advisors.

What is a Global Catastrophic Risk?

 

We think of global catastrophic risks (GCRs) as risks that could cause the collapse of human civilization or even the extinction of the human species. Prominent examples of human-driven global catastrophic risks include 1) nuclear war, 2) an infectious disease pandemic engineered by malevolent actors using synthetic biology, 3) hostile or uncontrolled deployments of artificial intelligence, and 4) climate change and other environmental degradation creating biological and physical conditions under which thriving human civilizations could not survive. Other significant GCRs exist as well, and we welcome proposals that address them.

  • Summer Research
  • THINK 65
  • Future Work
  • Learn More

Summer Research

SERI ran its inaugural Summer Undergraduate Research Fellowship this past summer, funding 20 undergraduate students to work with a mentor (a faculty member or industry professional) to carry out a 10-week research project dedicated to mitigating global catastrophic risks. Through speaker events, discussion groups, and social events, we also educated the cohort on the broader field of existential risks, built up our community of existential risk-focused researchers, and provided opportunities for students to explore career pathways.

Projects from the program included one exploring the potential of data visualization to communicate compelling GCR-scale arguments and tackle cognitive biases, another applying models of economic growth to AI-enabled technological growth in hopes of understanding plausible timelines for transformative AI, and a third designing an ideal building fitted with a variety of engineering non-pharmaceutical interventions to mitigate both seasonal and catastrophic infectious agents.

THINK 65

THINK 65, Preventing Human Extinction, is a class designed for Stanford freshmen interested in exploring topics around global catastrophic risk. The class has been taught for the last two years by SERI’s faculty leaders, Paul Edwards and Steve Luby. Students in THINK 65 engage with plausible scenarios by which catastrophe could occur, as well as with prospective solutions. They also discuss the psychological, social, and epistemological barriers that inhibit society from recognizing and evaluating these threats.


More information about the class can be found here: https://www.stanforddaily.com/2020/06/01/think-65-examines-paths-to-human-extinction/

Future Work

SERI is a very young group, founded in spring 2020, and we have a lot of other plans in the works. Check back soon to find out more, or get in touch at seri-contact@stanford.edu!

Learn More

If you want to learn more about existential or global catastrophic risks, here’s a list of resources you can refer to:

Recommended readings on GCRs

Organizations and opportunities for GCR mitigation
 

2020 Summer Undergraduate Research Fellowship Featured Projects

To see more of our 2020 Summer Undergraduate Research Fellowship Projects, check out our abstract book here.

  • A Guide to Engineering Buildings for the Next Pandemic
  • Cruxes in Forecasting AI Takeoff Speeds
  • A Comparative Analysis of Small States as Models for Existential Risk Mitigation Efforts
  • 未雨绸缪行动提议: An Action Proposal for Mitigating Risks of Potential AI Shock Points in China
  • Nukes and ‘Red Phones’: a Threat Assessment of Hotline Insecurities
  • If Humanity Were One Human
  • Empowering the Voiceless Future: Historical Case Studies
  • How Much Do We Need to Know: A Case Study of Pandemic Influenza Risk in the United States
  • Immunofocusing Strategies to Develop Universal Vaccines
  • Regenerating Topsoil
  • Education at the Intersection of Social Responsibility & the Life Sciences

A Guide to Engineering Buildings for the Next Pandemic

Arvie Violette    |    Full-Time Fellow
Mentor: Dr. Milana Trounce, Stanford University

More than ever, the global population is at risk for a potentially catastrophic infectious disease outbreak. Amid the concerns of engineered microbes, natural pathogens, and bioterrorism, there is a growing demand for nonspecific, preventative interventions, especially those that don’t rely on the agency of individuals. 

The Guide to Engineering Buildings for the Next Pandemic is an interactive and accessible tool that summarizes the research, implementation, and sustainability behind engineering controls suitable for non-medical buildings. The platform allows users to select qualities of their building and their budget, and then recommends the specific engineering controls that are most suitable for their project. Behind each engineering control is a comprehensive summary including cost-benefit concerns, methods of implementation, sustainability, and more.
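
A purely illustrative sketch of the kind of rule-based matching such a platform performs (not Arvie's actual implementation): a catalog of controls is filtered against the building's attributes and budget. The control names, costs, and criteria below are hypothetical placeholders.

# Hypothetical sketch of rule-based filtering; controls, costs, and
# selection criteria are illustrative, not taken from the actual guide.
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    cost: int            # rough installation cost in USD (illustrative)
    requires_hvac: bool  # needs an existing ducted HVAC system

CATALOG = [
    Control("Upper-room germicidal UV fixtures", 8_000, requires_hvac=False),
    Control("MERV-13 filter upgrade", 2_000, requires_hvac=True),
    Control("Portable HEPA air cleaners", 1_500, requires_hvac=False),
]

def recommend(budget: int, has_hvac: bool) -> list[str]:
    """Return the controls that fit the building's budget and HVAC situation."""
    return [c.name for c in CATALOG
            if c.cost <= budget and (has_hvac or not c.requires_hvac)]

print(recommend(budget=5_000, has_hvac=False))
# -> ['Portable HEPA air cleaners']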

See Arvie's website here.

Cruxes in Forecasting AI Takeoff Speeds

Jack Ryan    |    Full-Time Fellow
Mentor: Buck Shlegeris, Machine Intelligence Research Institute

My project sought to help answer the question: What will the graph of AI capabilities over time look like, and will there really be an “intelligence explosion” leading to “AI foom”? The question is imperative: if proponents of fast takeoff are right, then AI will be much more dangerous, because alignment problems might be less evident to AI developers and AI safety researchers won’t have as much time to test alignment solutions on weaker but analogous AI systems.

To better understand this question, I lay out the relevant facts for predicting AI growth rates, as well as what seem to be the cruxes leading to disagreement among AI forecasters. Lastly, I suggest that we need a more formal model of the intelligence explosion as well as further research into the possibility of an AI overhang so that our predictions can be more informed.
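
One way to see why the crux matters (a toy model for illustration only, not drawn from Jack's report): suppose capability I(t) grows at a rate that itself scales with capability,

\frac{dI}{dt} = c\, I^{1+\epsilon}, \qquad I(t) = \left(I_0^{-\epsilon} - c\,\epsilon\, t\right)^{-1/\epsilon} \quad (\epsilon \neq 0).

For \epsilon > 0 (strongly compounding returns to self-improvement) the solution diverges at the finite time t^{*} = I_0^{-\epsilon}/(c\,\epsilon), a foom-like discontinuity, whereas for \epsilon \le 0 growth is at most exponential and remains continuous. Estimating the sign and size of \epsilon is the kind of question a more formal takeoff model would need to answer.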

Learn more about Jack's project here.

A Comparative Analysis of Small States as Models for Existential Risk Mitigation Efforts

Sam Good    |    Full-Time Fellow
Mentor: Dr. R. James Breiding, S8Nations

Societies of the modern era are being forced to confront a tsunami of pressing catastrophic risks never before seen in human history. Other research establishes that many solutions and effective preparedness efforts with regard to such risks increasingly come from small, adaptive nations. This project highlights the ideas and policies of small nations that are aiding the mitigation of existential risks to humanity. Specifically, through extensive literature review, interviews, and the production of a podcast series, a number of "Big Lessons from Small Nations" were identified, with emphasis on future-minded policies and climate change mitigation efforts. The results show a notably higher density of such efforts in small nations' policies, innovations, and cultural values. The work and research contributed directly to the success of S8Nations, a Zurich-based think tank focused on the global impacts of small nations.
 

Sam’s podcast will be aired later in the fall. See a list of his findings here.
 

未雨绸缪行动提议: An Action Proposal for Mitigating Risks of Potential AI Shock Points in China

Zixian (Sunnie) Ma    |    Part-Time Fellow
Mentor: Jeffrey Ding, Future of Humanity Institute

A high-risk AI shock point, in the scope of this project, is defined as a development or deployment in the field of AI that could pose unexpected and significant risks to human society, pushing some moral or ethical boundary and creating panic and, often, calls for heightened governance. Powerful yet potentially unsafe or unethical AI applications that count as high-risk shock points, such as autonomous vehicles deployed prematurely, could pose irreversible and disastrous risks to human society, which could severely damage if not destroy human potential.

In this project, I propose an action plan named 未雨绸缪 (literally, "prepare before the rain falls"), which recommends actions for mitigating the risks of advanced AI applications. Specifically, this project focuses on addressing emerging risks in China’s AI development and deployment because I am personally interested in China’s AI policy, have more knowledge and leverage in this space as a Chinese citizen, and expect an AI shock point to occur in China given a recent biotech shock point there (i.e., the CRISPR-edited babies experiment). It is worth noting that, although my focus is on mitigating risks of AI shock points originating from China, the scope of the risks concerned is still global, and many of the recommended actions can be adapted to other countries’ AI development and deployment.

Learn more about Zixian's project here.

Nukes and ‘Red Phones’: a Threat Assessment of Hotline Insecurities

Katharine Leede    |    Full-Time Fellow
Mentor: Dr. Herb Lin, Stanford University

Nuclear weapons pose an existential threat to the world; the most probable cause of full-scale nuclear war may be miscommunication or miscalculation. As a result, reliable and secure communication networks are indispensable during escalatory crises. The purpose of this study is to provide an overview of the ways in which hotline communication can fail, in order to answer the question of how these systems can be improved. Buttressed by case studies, this project explores a variety of possible impediments to successful hotline communications, including technical, political, psychological, and organizational dimensions.

Learn more about Katharine's project here.

If Humanity Were One Human

Odelia Lorch    |    Full-Time Fellow
Mentor: Dr. Andrew Critch, Center for Human Compatible AI, UC Berkeley

We’re each aware of our own mortality, and grapple with this awareness throughout our lives. Yet as much as we may struggle to grasp our personal fates, it’s even harder to grasp the mortality of humanity. This project is a multi-page web visualization that prompts viewers to think about the future of humanity by asking the question, “What if humanity were one human?”
 

See Odelia's website here.

Empowering the Voiceless Future: Historical Case Studies

Mauricio Baker    |    Full-Time Fellow
Mentor: Dr. Jeremy Weinstein, Stanford University

Moral philosophers have compellingly argued that preserving the potential value of Earth's long-term future is of immense importance. For such efforts to succeed, it may be crucial that the interests of future generations receive durable, substantial political protections, which are currently lacking. What efforts would best contribute to such protections? To help answer this question, I first review insights from relevant fields, with a focus on decision-making psychology and institutional theory. Then, I investigate several historical developments that seem to be examples of voiceless groups receiving protection, since we may want to similarly protect the interests of future generations. Drawing on these studies, I create and argue for a qualitative, rational-choice model that makes predictions about when shifts toward greater political inclusion occur and persist. I discuss the model's implications for theories of moral circle expansion, political strategies, and institutional designs, with a focus on how people today can support future generations.

Learn more about Mauricio's project here.

How Much Do We Need to Know: A Case Study of Pandemic Influenza Risk in the United States

Jonathan Lipman    |    Full-Time Fellow
Mentor: Dr. Mehran Sahami, Stanford University

Knowledge provides the power to invest appropriately in furthering our understanding of long-term risks. This paper uses computer modeling (Bayesian networks) to help understand the risks from pandemic influenza and the assumptions on which estimates of those risks are based. We use sensitivity analysis to determine how greater uncertainty in selected model priors influences projected deaths, decimation, and mortality loss forecasts for pandemic influenza in the United States. We determined that uncertainty in the probability of a pandemic emerging in a given year is the most significant factor in forecasting the chance of a decimation of the US population over the next 100 years. These insights can guide our prioritization when profiling other, less studied diseases.
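
A minimal, purely illustrative sketch of the underlying idea (not Jonathan's actual model, which uses a fuller Bayesian network): propagate uncertainty in a single prior, the annual probability of pandemic emergence, to the chance of at least one pandemic over a 100-year horizon. All numbers are hypothetical placeholders.

# Illustrative Monte Carlo sensitivity analysis: propagate uncertainty in the
# annual probability of pandemic emergence to a 100-year forecast.
import numpy as np

rng = np.random.default_rng(0)

def chance_over_horizon(mean_p, concentration, years=100, n_samples=100_000):
    """P(at least one pandemic within `years`), marginalized over a Beta prior
    on the annual emergence probability."""
    a = mean_p * concentration
    b = (1.0 - mean_p) * concentration
    p = rng.beta(a, b, size=n_samples)   # samples of the uncertain annual probability
    return float(np.mean(1.0 - (1.0 - p) ** years))

# Same mean belief (3% per year) under increasingly uncertain priors.
for concentration in (1000, 100, 10):
    print(f"concentration={concentration}: "
          f"P(>=1 pandemic in 100 yr) = {chance_over_horizon(0.03, concentration):.3f}")

Holding the mean belief fixed while widening the prior shows how much the forecast shifts with our level of certainty about the annual rate, which is the quantity the sensitivity analysis flags as most important.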
 

Learn more about Jonathan's project here.

Immunofocusing Strategies to Develop Universal Vaccines

Harshu Musunuri    |    Full-Time Fellow
Mentor: Cassidy Nelson, Oxford University

Vaccines are a crucial aspect of pandemic preparedness and biosecurity. Currently, platform-based vaccine approaches are not fast enough to tackle novel pathogens, natural or engineered, with high lethality. Universal vaccines, designed to immunize against an entire family of viruses rather than specific strains, circumvent this issue and offer a chance at protection against emerging infections. To do so, these vaccines must elicit broadly neutralizing antibodies that target conserved viral epitopes. I review several existing "immunofocusing" strategies to engineer antigens that enable such universal protection, as well as opportunities for future research. While such vaccines have been explored most extensively in the context of influenza, I argue that universal approaches could be applicable and beneficial to a number of viral families, including flaviviruses, coronaviruses, and paramyxoviruses. Moreover, spillover effects of incentivizing research on universal vaccine development include better deterrence against malicious actors, insights transferable to broad-spectrum antiviral work, and protection against viruses neglected by pharmaceutical companies due to lack of a lucrative market.

Harshu’s research was carried out in collaboration with a lab at Stanford, and they do not wish to share the manuscript prior to publication. It will be linked here once it is publicly available. 

Regenerating Topsoil

Tule Horton    |    Full-Time Fellow
Mentor: Dr. Julia Novy-Hildesley, Stanford University

Despite receiving relatively little appreciation, topsoil plays a critical role in many of the climate change battles fought today: clean air, clean water, carbon sequestration, and food security. Yet topsoil degradation is occurring at record rates across the globe, largely driven by intensive agriculture and increasingly harsh weather conditions. Topsoil provides the fertility upon which all other life thrives, and in this way, desertification presents a massive existential risk that threatens the prosperity and health of future generations.
 
Farmers are rarely recognized as frontline workers in the fight against climate change and existential risk. However, through regenerative agriculture practices such as no-till, cover cropping, and cultivating biodiversity, as well as the use of new technologies including biochar and precision agriculture, farmers are positioned to regenerate topsoil and restore fertility and abundance to our ecosystems. Currently, political, economic, technological, and cultural barriers stand in farmers’ way. In my research article I outline these challenges as well as preliminary work to overcome such hurdles and encourage the regenerative practices that will restore soil health, and with it the human health to which it is often unknowingly but intrinsically linked.
 

Learn more about Tule's project here.

Education at the Intersection of Social Responsibility & the Life Sciences

Michelle Howard    |    Part-Time Fellow
Mentor: Dr. Megan Palmer, Stanford University

Insight into which educational approaches help future life scientists and policymakers attend to biological risk is important for mitigating socially damaging outcomes of life science developments. Stanford is especially equipped with both the human and technological resources to enable cutting-edge research in breakthrough areas such as synthetic biology. This project aims to assist Stanford with teaching life science research leaders to innovate while mitigating the risks of catastrophic outcomes, through an undergraduate education that promotes and instills social awareness in its graduates.
 
To assist Stanford with accomplishing this aim, this white paper presents considerations and recommendations for ensuring that people who influence life science research are aware of potential biosafety and biosecurity risks and are motivated to prevent and respond to them appropriately. Educating future life science and policy leaders about risk and social awareness may avert the high costs and public health damages posed by dual-use risks, such as engineered pandemics or gene drive releases, by preventing their occurrence. This project is based on a preventative philosophy: through education, apprentices and future leaders in the life sciences can develop risk-aware and self-governing mindsets that allow them to notice and halt events with catastrophic outcomes at the university and beyond.
 

Learn more about Michelle's project here.

Contact Us

Please email seri-contact@stanford.edu to get in touch with us!