Fellowship Spotlight: Anna-Katharina Ferl
A news feature highlighting the work of CISAC fellows
The Center for International Security and Cooperation (CISAC) offers a rich variety of fellowships that allow early-career scholars to focus on a range of security topics and to participate in seminars where they interact and collaborate with leading faculty and researchers.
In this Q&A, CISAC fellow Anna-Katharina Ferl discusses how differing interpretations of 'meaningful human control' shape debates on autonomous weapons systems, human-machine relations in warfare, and emerging policy visions for AI.
In your article “Imagining Meaningful Human Control: Autonomous Weapons and the (De-) Legitimisation of Future Warfare,” you highlight how different political actors interpret ‘meaningful human control’ in ways that either legitimize or delegitimize certain autonomous weapons systems (AWS). Could you elaborate on how these differing interpretations shape the kinds of regulatory or policy options that become politically viable in international forums like the United Nations (UN)?
Discussions on regulating autonomous weapons systems (AWS) have been ongoing for over a decade. While there are certainly practical hurdles to finding common ground among state parties at the United Nations—such as the lack of a commonly agreed-upon definition of AWS and the rapid expansion of artificial intelligence (AI) in recent years—I show that the socio-political interpretation of these technologies also matters for regulation. In my research, I draw on Science and Technology Studies (STS) to demonstrate that AWS, as sociotechnical systems, are not independent of political processes, but rather are co-constituted as governance objects within them. This may sound very academic, but it essentially means that we cannot understand or evaluate these technologies and their potential for regulation without considering how actors, such as diplomats and civil society organizations, understand and interpret them.
In this paper and in my more recent research, I investigate how these understandings and interpretations arise and why they make regulation so challenging. International law does not offer a definition of human control, and the development of AWS and broader AI applications in warfare poses new challenges to international law and arms control. I argue that these challenges stem not only from the strategic interests of actors, but also from the uncertain nature of technologies like AWS and AI. We do not know what meaningful human control would entail. Many have moved away from the term "human control" and are now raising questions about the responsible development and use of (military) AI. However, the same problems persist: diverging interpretations, the vagueness of international law, and disagreement over how regulation should be approached. That is why I believe investigating how these different interpretations arise is the key starting point.
In investigating how the regulation of autonomous weapons is shaped not just by state interests, but also by the broader discourses and knowledge practices around these technologies, how do you see recent developments in AI reshaping the way we conceptualize human-machine relations in warfare? What implications does that have for future arms control efforts?
Three recent developments have fundamentally changed the conversation around AI and war. First, the introduction of large language models (LLMs), such as ChatGPT and Claude, has transformed perceptions of AI's capabilities. For the first time, AI seemed to be more than a vague term and technology; it appeared to be a tangible and useful technology for the military. While this development has certainly created massive hype around the perceived promises of LLMs that have yet to be realized, organizations have become more confident in implementing AI technologies. In the military, this ranges from mundane logistics and personnel management tasks, where the advantages certainly outweigh the disadvantages, to critical applications, especially the integration of AI technologies into use-of-force and targeting decisions and operations. However, because the spectrum of application is so wide and most AI technologies today are inherently dual-use (meaning they can have civilian as well as military uses) and diffuse from the civilian to the military realm, regulation through arms control in the traditional sense is almost impossible.
Second, the use of AI technologies in ongoing conflicts, such as the war in Ukraine, has demonstrated the advantages of implementing cutting-edge technologies. Ukraine has proven successful in deploying these technologies through a combination of technological developments, such as drone swarms, pattern and facial recognition algorithms, open-source intelligence, and additive manufacturing. Experimenting with different AI technologies on the battlefield has certainly played a major role in how we perceive the future role of AI in warfare. This is also evident in the Israel Defense Forces' use of AI decision-support systems, such as Lavender and Gospel, in Gaza. Overall, there is a shift toward less human oversight, with decisions becoming increasingly distributed between humans and machines, making control more difficult to establish.
Third, broader geopolitical tensions and global competition over developing, implementing, and deploying AI for national security — especially between the United States and China — mean that a legally binding international agreement is much less likely today due to the fear that such an agreement could constrain strategic advantages. This makes nonproliferation efforts to curb the spread of these technologies beyond one’s own allies even more difficult. Rather than a comprehensive global regulatory framework, we are increasingly seeing a fragmented approach of political declarations, regional initiatives, and various national policies.
How do you see concepts like trust and human-centered AI shaping not just regulations, but the broader societal narratives and policy visions that governments adopt when framing AI across both security and civilian domains?
Human-centered AI has become a central concept for ensuring trust and responsibility in AI developments and policies. In a publication with two co-authors, we found that different actors — the European Union, China, and the United States — evoke human-centered design in policy documents to ensure functional control over AI. As a knowledge base, human-centered AI advances narratives of AI as a problem-solving tool whose anticipated risks are framed as manageable issues. These governmental visions center on AI as either a technological solution to larger socio-political problems or as a technical means, such as explainability, to ensure trustworthiness. These narratives have two things in common: they avoid hard regulation and reframe risks as design problems to be solved. They also rest on a functional rather than an ethical or democratic understanding of AI technologies. While trust, responsibility, and safety are important characteristics of AI technologies, these concepts risk becoming empty buzzwords if they are not given meaning or acted upon.
In terms of success, which accomplishments are you most proud of?
I am proud to be the first person in my family to earn a Ph.D., and that I am now a postdoctoral fellow at CISAC. Now, my cousin also wants to attend graduate school. We often take these opportunities for granted, so I am immensely grateful to my family for supporting me through graduate school, and to my faculty mentors, Paul Edwards and Steve Luby, for trusting me with the SERI postdoctoral fellowship.
Recently, I have had the exciting and humbling experience of leading the efforts to organize the 2025 SERI Symposium. The conference brought together over 20 speakers and more than 100 participants from all around California to discuss the most pressing issues of our time. Creating this community of scholars has shown me that success encompasses more than academic publications; it also involves advancing scholarly and public discourse on the kind of world we want to live in.
What is something that people would be surprised to learn about you?
I am sometimes surprised by the fact that I am actually a very creative person who ended up in political science and academia by chance. Throughout high school, I planned to attend art school and study graphic design. I even interned at graphic design agencies. Even today, I paint and draw a lot to balance my academic life. During my postdoctoral fellowship, I took up ceramics as a way to counterbalance long days in front of the computer.