Stanford Existential Risks Initiative


Stanford Existential Risks Initiative (SERI)

Our Mission

The Stanford Existential Risks Initiative is a collaboration between Stanford faculty and students dedicated to mitigating global catastrophic risks, such as extreme climate change, nuclear winter, global pandemics (and other risks from synthetic biology), and risks from advanced artificial intelligence. Our goal is to foster engagement from both within and beyond the Stanford community to produce meaningful work aiming to preserve the future of humanity. We aim to provide skill-building, networking, professional pathways, and community for students and faculty interested in pursuing existential risk reduction. Our current programs include a research fellowship, an annual conference, speaker events, discussion groups, and a Thinking Matters class (Preventing Human Extinction) taught annually by two of the initiative's faculty advisors.

What is an existential risk?

 

We think of existential risks, or global catastrophic risks, as risks that could cause the collapse of human civilization or even the extinction of the human species. Prominent examples of human-driven global catastrophic risks include 1) nuclear winter, 2) an infectious disease pandemic engineered by malevolent actors using synthetic biology, 3) catastrophic accidents/misuse involving AI, and 4) climate change and/or environmental degradation creating biological and physical conditions that thriving human civilizations would not survive. Other significant catastrophic risks exist as well. 

  • Existential Risks Conference (April 17-18)
  • Summer Research Internship (open to all)
  • THINK 65
  • Discussion Groups
  • Future Work
  • Learn More

Existential Risks Conference (April 17-18)

SERI is currently planning a two-day (April 17-18) virtual conference on existential risks and longtermism (the importance of prioritizing the long-term future) for over 500 academics, professionals, and students from around the world.

Our speakers include Rose Gottemoeller (Stanford), former Deputy Secretary General of NATO; Toby Ord (Oxford), author of The Precipice; and Stuart Russell (UC Berkeley), author of Human Compatible: Artificial Intelligence and the Problem of Control and Artificial Intelligence: A Modern Approach.

To learn more and apply (by April 12th), see http://sericonference.org/ 

Goals of the conference include:

  • Creating common understanding: of the importance and magnitude of existential risks, of longtermism (making safeguarding the long-term future an academic and ethical priority), and of the existing field of existential risk mitigation
  • Creating connections: among potential collaborators, funders and grantees, employers and employees, mentors and mentees, and others. 
  • Creating opportunities: for further engagement through research, careers/internships, collaborations, grants/funding, and more.

 

Conference structure/schedule:

The conference will consist of a combination of panels, speaker events, Q&As, workshops, socials/networking, and more. It will take place April 17th-18th (Saturday and Sunday) from 9 AM to 6 PM PDT.

Summer Research Internship (open to all)

The Stanford Existential Risks Initiative is collaborating with researchers from the Future of Humanity Institute to offer a full-time, ten-week summer research internship in which accepted researchers work on projects mitigating existential risks. Accepted researchers will also have the opportunity to work with a mentor.

Apply here: http://tinyurl.com/seri21summer. Applications are due 11:59 PM PDT on Wednesday, April 21st. Let us know in the application if you need an early decision, and we can review yours sooner.

The program aims to build a community of existential risk researchers via cause-area subgroups, speaker events, discussion groups, 1:1s, and social events. Fellows will work with a mentor (a faculty member, PhD/postdoctoral scholar, or domain expert), who will provide weekly advising along with guidance on projects and their design. An existential risk is a risk of human extinction, irreversible civilizational collapse, or some other permanent curtailment of humanity's future potential.

Prominent examples of these risks include 1) engineered pandemics, 2) catastrophic accidents involving transformative artificial intelligence, 3) nuclear war and subsequent nuclear winter, and 4) extreme, irreversible environmental devastation. We are excited to support research ranging from the purely technical to the philosophical, as long as it is directly relevant to mitigating existential risk. We are looking for projects that, if successful, would reduce, even slightly, the likelihood that human civilization will permanently collapse, that humans will go extinct, or that the potential of humanity will be otherwise diminished. For guidance on generating a project proposal, including examples of past projects, please see this resource: https://tinyurl.com/seri-21-app-advice. To apply, please fill out the form below.

If you have any questions or would like to discuss potential projects, please reach out to seri-contact@stanford.edu, or come to office hours at one of the times listed here: https://docs.google.com/spreadsheets/d/1Tr3Q9cuajs3lreeRIcwpERESqo5vsojgSC5zJSTCJLk/edit#gid=0

Some projects from past programs include a project exploring the potential of data visualization to communicate compelling GCR-scale arguments and tackle cognitive biases, a project applying models of economic growth to AI-enabled technological growth in hopes of understanding plausible timelines for transformative AI, and a project designing an ideal building fitted with a variety of engineering non-pharmaceutical interventions to mitigate both seasonal and catastrophic infectious agents.

 If you have any questions, please consult the FAQs on the application forms or come to our office hours.

THINK 65

THINK 65, Preventing Human Extinction, is a class designed for Stanford freshmen interested in exploring topics around global catastrophic risk. The class has been taught for the last two years by SERI's faculty leaders, Paul Edwards and Steve Luby. Students in THINK 65 engage with plausible scenarios by which catastrophe could occur, as well as with prospective solutions. They also discuss the psychological, social, and epistemological barriers that inhibit society from recognizing and evaluating these threats.


More information about the class can be found here: https://www.stanforddaily.com/2020/06/01/think-65-examines-paths-to-human-extinction/

Discussion Groups

SERI runs several discussion groups for students interested in existential risk to learn and share ideas together. Groups include:

Existential Risk and Social Sciences Reading Group

The Stanford Existential Risks Initiative is running an introductory reading group on Existential Risks & the Social Sciences. The group reads about and discusses insights from the social sciences into how we can mitigate existential risks: catastrophes like extreme climate change or nuclear war that could end human life on Earth. The reading group is open to students who are new to studying existential risks and to students from all universities. The link to sign up is here.

 

Introductory Transformative AI Safety Reading Group

SERI runs a quarterly intro-level reading group for AI Safety. The group aims to map out the landscape of diverse potential AI risks, from agent alignment to governance and regulation of current AI. An additional goal is to produce a high-quality summary of readings and discussions, to be made publicly available afterwards to serve as a valuable resource for other groups or individuals looking to start orienting themselves in AI Safety. Readings for the group are linked in this doc.

 

Biosecurity Reading Group (Collaboration with Cambridge)

Students from SERI and Cambridge are collaborating to run a reading group for community members interested in using their careers to do good in the area of biosecurity and mitigate global catastrophic biological risks. This includes those who are considering this at an early stage, those who are early-career, and those who are already established in this field. The group meets every two weeks.

 

The Precipice Reading Group

SERI runs a weekly Book Club on the recently (March 2020) released The Precipice: Existential Risk and the Future of Humanity by Oxford philosophy professor Toby Ord. This urgent and eye-opening book makes the case that protecting humanity's future and mitigating existential risks are the central challenges of our time. An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom, 2002).

 

Future Work

SERI is a very young group, founded in spring 2020, and we have a lot of other plans in the works. Check back soon to find out more, or get in touch at seri-contact@stanford.edu!

Learn More

If you want to learn more about existential or global catastrophic risks, here’s a list of resources you can refer to:

Recommended readings on GCRs

Organizations and opportunities for GCR mitigation

Join our Mailing List
 

2020 Summer Undergraduate Research Fellowship Featured Projects

To see more of our 2020 Summer Undergraduate Research Fellowship Projects, check out our abstract book here.

  • A Guide to Engineering Buildings for the Next Pandemic
  • Cruxes in Forecasting AI Takeoff Speeds
  • A Comparative Analysis of Small States as Models for Existential Risk Mitigation Efforts
  • 未雨绸缪行动提议: An Action Proposal for Mitigating Risks of Potential AI Shock Points in China
  • Nukes and ‘Red Phones’: a Threat Assessment of Hotline Insecurities
  • If Humanity Were One Human
  • Empowering the Voiceless Future: Historical Case Studies
  • How Much Do We Need to Know: A Case Study of Pandemic Influenza Risk in the United States
  • Immunofocusing Strategies to Develop Universal Vaccines
  • Regenerating Topsoil
  • Education at the Intersection of Social Responsibility & the Life Sciences

A Guide to Engineering Buildings for the Next Pandemic

Arvie Violette    |   Full-time fellow
Mentor: Dr. Milana Trounce, Stanford University

More than ever, the global population is at risk of a potentially catastrophic infectious disease outbreak. Amid concerns about engineered microbes, natural pathogens, and bioterrorism, there is a growing demand for nonspecific, preventative interventions, especially those that don't rely on the agency of individuals.

The Guide to Engineering Buildings for the Next Pandemic is an interactive and accessible tool that summarizes the research, implementation, and sustainability behind engineering controls suitable for non-medical buildings. The platform allows users to select characteristics of their building and their budget, and recommends the specific engineering controls most suitable for their project. Each engineering control is accompanied by a comprehensive summary covering cost-benefit considerations, methods of implementation, sustainability, and more.
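As a rough illustration of how such a recommendation step might work, here is a minimal sketch that filters a small catalog of controls by building type and budget. The control names, costs, and matching rules are hypothetical placeholders, not data or logic taken from Arvie's guide.

```python
# Hypothetical sketch of a control-recommendation step: the controls, costs,
# and matching rules below are illustrative, not data from the actual guide.

CONTROLS = [
    {"name": "Upgraded HVAC filtration (MERV-13+)", "cost": 20_000, "building_types": {"office", "school"}},
    {"name": "Upper-room UV germicidal irradiation", "cost": 35_000, "building_types": {"office", "school", "retail"}},
    {"name": "Increased outdoor-air ventilation rate", "cost": 10_000, "building_types": {"office", "school", "retail"}},
    {"name": "Portable HEPA air cleaners", "cost": 5_000, "building_types": {"office", "school", "retail"}},
]

def recommend(building_type, budget):
    """Return the controls applicable to the building type that fit within the budget."""
    return [c["name"] for c in CONTROLS
            if building_type in c["building_types"] and c["cost"] <= budget]

print(recommend("school", budget=15_000))
# ['Increased outdoor-air ventilation rate', 'Portable HEPA air cleaners']
```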

See Arvie's website here.

Cruxes in Forecasting AI Takeoff Speeds

Jack Ryan    |    Full-Time Fellow
Mentor:  Buck Shlegeris, Machine Intelligence Research Institute

My project sought to help answer the question: What will the graph of AI capabilities over time look like, and will there really be an “intelligence explosion” leading to “AI foom”? The question is pressing: if proponents of a fast takeoff are right, then AI will be much more dangerous, because alignment problems might be less evident to AI developers and AI safety researchers won’t have as much time to test alignment solutions on weaker but analogous AI systems.

To better understand this question, I lay out the relevant facts for predicting AI growth rates, as well as what seem to be the cruxes leading to disagreement among AI forecasters. Lastly, I suggest that we need a more formal model of the intelligence explosion as well as further research into the possibility of an AI overhang so that our predictions can be more informed.
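As a toy illustration of why the shape of that graph matters (this is a caricature under simple assumptions, not Jack's model), compare exponential growth, which compounds at a steady rate, with hyperbolic growth, whose doubling time shrinks and which diverges in finite time:

```python
import math

# Toy comparison (not the project's model): exponential vs. hyperbolic growth.
# Hyperbolic growth, where the growth rate itself rises with capability,
# reaches a finite-time singularity -- a caricature of a "fast takeoff".

def exponential(t, x0=1.0, rate=0.1):
    """Capability compounding at a constant proportional rate."""
    return x0 * math.exp(rate * t)

def hyperbolic(t, x0=1.0, t_singular=50.0):
    """Capability proportional to 1 / (t_singular - t); diverges as t approaches t_singular."""
    return x0 * t_singular / (t_singular - t)

for t in (0, 20, 40, 45, 49):
    print(f"t={t:2d}   exponential={exponential(t):7.1f}   hyperbolic={hyperbolic(t):7.1f}")
```

Under the hyperbolic curve, most of the growth arrives in the last few time steps, which is precisely the scenario in which developers and safety researchers would have the least time to notice and correct alignment failures.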

Learn more about Jack's project here.

A Comparative Analysis of Small States as Models for Existential Risk Mitigation Efforts

Sam Good    |    Full-Time Fellow
Mentor: Dr. R. James Breiding, S8Nations

Societies of the modern era are being forced to confront a tsunami of pressing catastrophic risks never before seen in human history. Existing research suggests that many solutions and effective preparedness efforts for such risks increasingly come from small, adaptive nations. This project highlights the ideas and policies of small nations that are aiding the mitigation of existential risks. Specifically, through extensive literature review, interviews, and the production of a podcast series, a number of "Big Lessons from Small Nations" were identified, with emphasis on future-minded policies and climate change mitigation efforts. The results show a significantly higher density of such efforts in small nations' policies, innovations, and cultural values. The work and research contributed directly to the success of S8Nations, a Zurich-based think tank focused on the global impacts of small nations.
 

Sam’s podcast will air later in the fall. See a list of his findings here.
 

未雨绸缪行动提议: An Action Proposal for Mitigating Risks of Potential AI Shock Points in China

Zixian (Sunnie) Ma    |    Part-Time Fellow
Mentor: Jeffrey Ding, Future of Humanity Institute

A high-risk AI shock point, in the scope of this project, is defined as a development or deployment in the field of AI that could pose unexpected and significant risks to human society, pushing a moral or ethical boundary and creating panic and, often, calls for heightened governance. Powerful yet potentially unsafe or unethical AI applications that count as high-risk shock points, such as autonomous vehicles deployed prematurely, could pose irreversible and disastrous risks to human society, which could severely damage if not destroy human potential.

In this project, I propose an action plan named 未雨绸缪 (a Chinese idiom meaning "prepare before it rains"), which recommends actions for mitigating the risks of advanced AI applications. Specifically, this project focuses on addressing emerging risks in China's AI development and deployment: I am personally interested in China's AI policy, have more knowledge and leverage in this space as a Chinese citizen, and expect an AI shock point to occur in China given a recent biotech shock point there (i.e., the CRISPR'd babies experiment). It is worth noting that, although my focus is on mitigating risks of AI shock points originating from China, the scope of the risks concerned is still global, and many of the recommended actions can be adapted to other countries' AI development and deployment.

Learn more about Zixian's project here

Nukes and ‘Red Phones’: a Threat Assessment of Hotline Insecurities

Katharine Leede   |   Full-time fellow
Mentor: Dr. Herb Lin, Stanford University

Nuclear weapons pose an existential threat to the world, and the most probable cause of full-scale nuclear war may be miscommunication or miscalculation. As a result, reliable and secure communication networks are indispensable during escalatory crises. The purpose of this study is to provide an overview of the ways in which hotline communication can fail, in order to answer the question of how hotlines can be improved. Buttressed by case studies, this project explores a variety of possible impediments to successful hotline communications, spanning technical, political, psychological, and organizational dimensions.

Learn more about Katharine's project here

If Humanity Were One Human

Odelia Lorch    |    Full-Time Fellow
Mentor: Dr. Andrew Critch, Center for Human Compatible AI, UC Berkeley

We’re each aware of our own mortality, and grapple with this awareness throughout our lives. Yet as much as we may struggle to grasp our personal fates, it’s even harder to grasp the mortality of humanity. This project is a multi-page web visualization that prompts viewers to think about the future of humanity by asking the question, “what if humanity were one human?” 
 

See Odelia's website here

Empowering the Voiceless Future: Historical Case Studies

Mauricio Baker    |    Full-Time Fellow
Mentor: Dr. Jeremy Weinstein, Stanford University

Moral philosophers have compellingly argued that preserving the potential value of Earth's long-term future is of immense importance. For such efforts to succeed, it may be crucial that the interests of future generations receive durable, substantial political protections, which are currently lacking. What efforts would best contribute to such protections? To help answer this question, I first review insights from relevant fields, with a focus on decision-making psychology and institutional theory. Then, I investigate several historical developments that appear to be examples of voiceless groups receiving protection, since we may want to similarly protect the interests of future generations. Drawing on these case studies, I construct and argue for a qualitative, rational-choice model that makes predictions about when shifts toward greater political inclusion occur and persist. I discuss the model's implications for theories of moral circle expansion, political strategies, and institutional design, with a focus on how people today can support future generations.

Learn more about Mauricio's project here.

How Much Do We Need to Know: A Case Study of Pandemic Influenza Risk in the United States

Jonathan Lipman    |    Full-Time Fellow
Mentor: Dr. Mehran Sahami, Stanford University

Knowledge provides the power to invest appropriately in furthering our understanding of long-term risks. This paper uses computer modeling (Bayesian networks) to help understand the risks from pandemic influenza and the assumptions on which estimates of those risks are based. We use sensitivity analysis to determine how greater uncertainty in selected model priors influences projected deaths, decimation, and mortality-loss forecasts for pandemic influenza in the United States. We found that certainty about the probability of a pandemic emerging in a given year is the most significant factor in forecasting the chance of a decimation of the US population over the next 100 years. These insights can guide the prioritization of profiling other, less-studied diseases.
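The core of this kind of sensitivity analysis can be sketched with a toy calculation (not the project's actual Bayesian network): vary the assumed annual probability that a pandemic emerges and observe how strongly the century-scale forecast responds.

```python
# Toy sensitivity analysis over a single prior: the annual probability that a
# pandemic emerges. This is an illustrative sketch, not the project's model.

def prob_at_least_one_pandemic(annual_prob, years=100):
    """P(at least one pandemic in `years` years), assuming independent years."""
    return 1.0 - (1.0 - annual_prob) ** years

# Sweep a range of plausible priors and see how the 100-year forecast shifts.
for annual_prob in (0.01, 0.02, 0.03, 0.05):
    century_risk = prob_at_least_one_pandemic(annual_prob)
    print(f"annual p = {annual_prob:.2f} -> P(>=1 pandemic in 100 yrs) = {century_risk:.2f}")
```

Even in this toy version, the century-scale forecast swings from roughly 63% to over 99% as the annual prior moves from 1% to 5%, which illustrates why uncertainty about that single prior can dominate the forecast.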
 

Learn more about Jonathan's project here.

Immunofocusing Strategies to Develop Universal Vaccines

Harshu Musunuri    |    Full-Time Fellow
Mentor: Cassidy Nelson, Oxford University

Vaccines are a crucial aspect of pandemic preparedness and biosecurity. Currently, platform-based vaccine approaches are not fast enough to tackle novel pathogens, natural or engineered, with high lethality. Universal vaccines that are designed to immunize against an entire family of viruses rather than specific strains circumvent this issue and offer a chance at protection against emerging infections. To do so, these vaccines must elicit broadly neutralizing antibodies that target conserved viral epitopes. I review several existing "immunofocusing" strategies to engineer antigens that enable such universal protection, and opportunities for future research directions. While such vaccines have been explored most extensively in the context of influenza, I argue that universal approaches could be applicable and beneficial to a number of viral families, including flaviviruses, coronaviruses, and paramyxoviruses. Moreover, spillover effects of incentivizing research on universal vaccine development include better deterrence against malicious actors, insights transferrable to broad-spectrum antiviral work, as well as protection against viruses neglected by pharmaceutical companies due to lack of a lucrative market.

Harshu’s research was carried out in collaboration with a lab at Stanford, and they do not wish to share the manuscript prior to publication. It will be linked here once it is publicly available. 

Regenerating Topsoil

Tule Horton   |   Full-time fellow
Mentor: Dr. Julia Novy-Hildesley, Stanford University

Though it receives relatively little appreciation, topsoil plays a critical role in many of the climate change battles fought today: clean air, clean water, carbon sequestration, and food security. Yet topsoil degradation is occurring at record rates across the globe, largely driven by intensive agriculture and increasingly harsh weather conditions. Topsoil provides the fertility upon which all other life thrives, and in this way desertification presents a massive existential risk that threatens the prosperity and health of future generations.
 
Farmers are rarely recognized as frontline workers in the fight against climate change and existential risk. However, through regenerative agriculture practices such as no-till, cover cropping, and cultivating biodiversity, as well as the use of new technologies including biochar and precision agriculture, farmers are positioned to regenerate topsoil and restore fertility and abundance to our ecosystems. Currently, political, economic, technological, and cultural barriers stand in farmers' way. In my research article I outline these challenges, as well as preliminary work to overcome such hurdles and encourage the regenerative practices that will restore soil health, and with it the human health that is often unknowingly but intrinsically linked to it.
 

Learn more about Tule's project here

Education at the Intersection of Social Responsibility & the Life Sciences

Michelle Howard    |    Part-Time Fellow
Mentor: Dr. Megan Palmer, Stanford University

Insight into which aspects of education help future life scientists and policymakers attend to biological risk is important for mitigating socially damaging outcomes of developments in the life sciences. Stanford is especially well equipped, with both human and technological resources, to enable cutting-edge research in breakthrough areas such as synthetic biology. This project aims to assist Stanford with teaching life science research leaders to innovate while mitigating the risk of catastrophic outcomes, through an undergraduate education that promotes and instills social awareness in its graduates.
 
To assist Stanford with accomplishing this aim, this white paper presents considerations and recommendations for ensuring that those who influence life science research are aware of potential biosafety and security risks and feel compelled to prevent and respond to them appropriately. Educating future life science and policy leaders in risk and social awareness may avert the high costs and public health damage posed by dual-use risks, such as engineered pandemics or gene drive releases, by preventing their occurrence. This project is based on a preventative ideology: through education, apprentices and future leaders in the life sciences with risk-aware and self-governing mindsets may notice and halt events with catastrophic outcomes at the university and beyond.
 

Learn more about Michelle's project here.

Contact Us

Please email seri-contact@stanford.edu to get in touch with us!