Why the Military Can’t Trust AI

Large Language Models Can Make Bad Decisions, and Could Trigger Nuclear War

In 2022, OpenAI unveiled ChatGPT, a chatbot that uses large language models (LLMs) to mimic human conversation and answer users’ questions. The chatbot’s extraordinary abilities sparked a debate about how LLMs might be used to perform other tasks, including fighting a war. Some researchers, among them Professor Yvonne McDermott Rees of Swansea University, have shown how generative AI technologies might be used to enforce discriminate, and therefore more ethical, uses of force. Others, such as advisers to the International Committee of the Red Cross, have warned that these technologies could remove human decision-making from the most vital questions of life and death.

The U.S. Department of Defense is now seriously investigating what LLMs can do for the military. In the spring of 2022, the DOD established the Chief Digital and Artificial Intelligence Office to explore how artificial intelligence can help the armed forces. In November 2023, the department released its strategy for adopting AI technologies, optimistically reporting that “the latest advancements in data, analytics, and AI technologies enable leaders to make better decisions faster, from the boardroom to the battlefield.” AI-enabled technologies are already in use: U.S. troops, for example, have relied on AI-enabled systems to select Houthi targets in the Middle East.

Continue reading at foreignaffairs.com