Fellowship Spotlight: Johanna Rodehau-Noack

A new feature highlighting the work of CISAC fellows
Johanna Rodehau-Noack (Photo credit: Rod Searcey)

The Center for International Security and Cooperation (CISAC) offers a rich variety of fellowships that allow early-career scholars to focus on a range of security topics and to participate in seminars where they interact and collaborate with leading faculty and researchers.

In this Q&A, second-year CISAC fellow Dr. Johanna Rodehau-Noack discusses the challenges of using metaphors to understand conflict and the limitations of analogies in framing the risks associated with AI.

You've written about how war is often described using medical terms, like an epidemic or a disease. Why do you think these metaphors are popular, and how do they influence the way we think about and respond to conflicts?

Metaphors are mental aids that help people imagine things that are otherwise hard to grasp because they are abstract concepts or lie outside their everyday lives and cultural experiences. This ability of metaphors to translate the unfamiliar into a familiar realm is what makes them so appealing, I think. We use metaphors frequently in our daily lives, often without being aware of it (think: heart of gold, time is money, apple of my eye). Because we use metaphors so widely, they also appear in policy, such as the prevention of armed conflict, which is one of my focus areas. In my research, I show that metaphors equating armed conflict and war with illness and infection invoke a particular idea about how the world works: war is an affliction that ails the body of the state, while the ‘healthy’ normal state of the world is one in which states are at peace. By looking at historical and contemporary documents, I demonstrate that disease metaphors are used either to characterize war as a moral failure that needs to be corrected for the sake of ‘civilization,’ or, as is the case with more recent prevention policy, to represent war as a technical-scientific problem akin to public health that can be resolved with modern governance interventions.

In the article “Today’s AI threat: More like nuclear winter than nuclear war,” you and your colleague Dan Zimmer critique the analogy between AI risks and nuclear threats. Could you explain why this analogy is misleading and what a more accurate framework for understanding AI risks would look like?

My colleague Dan Zimmer and I argue that drawing an analogy between the risks posed by AI and the risks carried by nuclear weapons, as is widely popular these days in both public and policy discourse, is understandable but misleading for several reasons. There are notable parallels between the two technologies – such as the pace at which they have developed in some phases or the wide-reaching impact that many assume they will have – but also important differences. Nuclear weapons technology is squarely the domain of states, while the forerunners in AI development today are private-sector companies. For this reason, the two technologies also proliferate quite differently – you can’t usually build a nuclear weapon in your basement, but you can deploy a whole menu of AI algorithms for all kinds of purposes, nefarious or beneficial, with a laptop from the comfort of your home. More importantly, the kind of risk people imagine when they equate AI with nukes is the devastating radioactive fallout of a nuclear exchange between states with massive arsenals; in the case of AI, the analogous catastrophe is an intelligence explosion in which machines get exponentially smarter and turn against their human creators. This idea of nuclear risk is outdated, however, because we now know that it does not take many warheads to trigger catastrophic environmental effects. That is why Dan and I argue that, if we have to use a nuclear analogy for AI, it should be nuclear winter, because it bears a greater similarity to the ways in which relatively ‘dumb’ AI is already causing cascading damage to our political, social, and environmental systems that we cannot yet fully comprehend. A more accurate framework would therefore focus on the existing harms that AI is already inflicting, rather than speculative ones, and leverage a whole host of institutions, cooperative efforts, and legal frameworks to address them, instead of putting our hope in a single institution governing the peaceful use of AI, as the IAEA does for nuclear technology.

Your previous research highlights the use of death counts to measure the severity of conflicts. What are the advantages and limitations of using these numbers to understand war, and what other methods can give us a fuller picture?

Death counts are a central way of representing the damage that armed conflict and war inflict on societies. They are an important and powerful tool for organizations such as the United Nations to show that armed violence is a problem that kills an unfathomable number of people, and in this way they help make the case for preventive measures. However, I show in my research that death counts are often used as implicit or explicit thresholds, and in this way they shape our ideas of which kinds of violence count as ‘conflict’ and ‘war’ and which do not. In turn, when armed hostilities must reach a high enough number of deaths to warrant our attention, those deaths become aggregated and abstract. I therefore argue that, in addition to using counts as powerful tools for advocacy, we need to contextualize those numbers with the lived experiences and circumstances of the deaths for which each of them stands, so that we can better appreciate what it is that we intend to prevent.

What is something that people would be surprised to learn about you?

A fun fact about me is that I was a musician and sound engineer in a former life. Growing up, I learned to play a whole bunch of instruments (cello, piano, electric bass), all of which I played in various bands over the years, and I am trained in classical singing. Before switching to Political Science and International Relations, I studied sound engineering at the University of Music and Performing Arts in Vienna, Austria, where I learned everything from mixing on a 96-channel console and placing microphones correctly to editing audio tracks and operating a boom on a film set.