The Politics and Ethics of Nonproliferation
A lasting legacy of the Cold War is the continued existence of weapons of mass destruction--above all, nuclear arms. The international political context in which they exist, however, has changed drastically. Father Hehir will probe this changed context of proliferation, addressing the ethical and strategic challenges inherited from the past and now reshaped in this century.
Drell Lecture Recording: NA
Speaker's Biography: J. Bryan Hehir is the Parker Gilbert Montgomery Professor of the Practice of Religion and Public Life at Harvard University and the Secretary for Social Services and President of Catholic Charities for the Archdiocese of Boston. Father Hehir's research focuses on ethics and foreign policy, and the role of religion in world politics and American society. His writings include The Moral Measurement of War: A Tradition of Continuity and Change and Military Intervention and National Sovereignty.
Oak Lounge
Katrina: Redefining the Essence of Homeland Security
Paul Stockton is the associate provost at the Naval Postgraduate School in Monterey, California, and is director of its Center for Homeland Defense and Security. Stockton is the editor of Homeland Security (forthcoming from Oxford University Press in 2005). His research has appeared in Political Science Quarterly, International Security and Strategic Survey. He is co-editor of Reconstituting America's Defense: America's New National Security Strategy (1992). Stockton has also published an Adelphi Paper and has contributed chapters to a number of books, including James Lindsay and Randall Ripley, eds., U.S. Foreign Policy After the Cold War (1997). Stockton received a BA summa cum laude from Dartmouth College in 1976 and a PhD in government from Harvard University in 1986. From 1986 to 1989 he served as legislative assistant to U.S. Senator Daniel Patrick Moynihan. Stockton was awarded a postdoctoral fellowship for 1989-1990 by CISAC. In August 1990, he joined the faculty of the Naval Postgraduate School. From 1995 until 2000, he served as director of the NPS Center for Civil-Military Relations. In 2000, he founded the NPS School of International Graduate Studies and served as its acting dean until 2001. He was appointed associate provost in 2001.
Reuben W. Hills Conference Room
Beauty and Terror: Does Mathematics Have a Role to Play in Winning the Shadow War?
How do you stop a terrorist?
You can work hard: Post men and equipment at every street corner, every port, every bay, every slip of beach, every straight stretch of asphalt long enough to land a plane.
You will spend billions, and your lines will be thin. All you've done is build the "impregnable" Atlantic Sea Wall--which the Allies punched through in hours on D-Day.
You've got to work smarter, not harder.
The opening line of the Oscar-winning movie A Beautiful Mind is "Mathematicians won the war." During World War II, the mathematics underlying cryptography played an important role in military planning.
Thereafter came a new kind of war. After the first frosts of the Cold War descended in the Soviet East, perhaps $2 billion was spent on the development of Game Theory.
Now again we face a new kind of war. And we need a new kind of mathematics to fight it.
Since 2001, tremendous amounts of information have been gathered regarding terrorist cells and individuals potentially planning future attacks. There is now a pressing need to develop new mathematical and computational techniques to assist in the analysis of this information, both to quantify future threats and to assess the effectiveness of counterterrorism operations and strategies. Concepts and techniques from mathematics--specifically, from Lattice Theory and Reflexive Theory--have already been applied to counterterrorism and homeland security problems. The following is a partial list of such problems.
1. Strategies for disrupting terrorist cells
2. Data analysis of terrorist activity
3. Border penetration and security
4. Terrorist cell formation
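The first problem on this list gives a flavor of the lattice-theoretic approach: model the cell as an ordered hierarchy and ask how many members must be captured so that every chain of command from the leadership down to the foot soldiers is broken. The Python sketch below is a toy brute-force illustration of that idea; the cell structure and names are invented for illustration, and the actual methods discussed in the lecture are more sophisticated.

```python
from itertools import combinations

# A hypothetical three-level cell: edges point from superior to
# subordinate. The structure and names are invented for illustration.
CELL = {
    "leader": ["lt1", "lt2"],
    "lt1":    ["foot1", "foot2"],
    "lt2":    ["foot2", "foot3"],
    "foot1": [], "foot2": [], "foot3": [],
}

def chains(graph, node, path=()):
    """Enumerate every maximal chain of command from `node` downward."""
    path = path + (node,)
    if not graph[node]:          # a foot soldier: the chain is complete
        yield path
        return
    for sub in graph[node]:
        yield from chains(graph, sub, path)

def min_cut(graph, root, protected=frozenset()):
    """Smallest set of removable members whose capture breaks every chain.

    `protected` holds members assumed too hard to reach (e.g. the leader).
    Brute force over subsets: fine for toy cells, exponential in general.
    """
    all_chains = [set(c) for c in chains(graph, root)]
    members = sorted(set(graph) - set(protected))
    for k in range(1, len(members) + 1):
        for cut in combinations(members, k):
            if all(c & set(cut) for c in all_chains):
                return set(cut)
    return None

# If the leader is unreachable, capturing both lieutenants still severs
# every chain from the leadership to the foot soldiers.
print(sorted(min_cut(CELL, "leader", protected={"leader"})))  # ['lt1', 'lt2']
```

The point of the order-theoretic framing is that "disrupting the cell" becomes a precise question about cutsets of a partially ordered set, which can then be quantified and compared across candidate strategies.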
Jonathan Farley is a CISAC science fellow and a professor in the Department of Mathematics and Computer Science at the University of the West Indies, Jamaica. His work focuses on applying lattice theory and other branches of mathematics to problems in counterterrorism and homeland security.
In 2001-2002 he was one of four Americans to win a Fulbright Distinguished Scholar Award to the United Kingdom. In the calendar years 2003 and 2004 he taught as a professor in the Department of Applied Mathematics at the Massachusetts Institute of Technology. In 2004 he received the Harvard Foundation's Distinguished Scientist of the Year Award, a medal presented on behalf of the president of Harvard University for "outstanding achievements and contributions in the field of mathematics." The City of Cambridge, Mass., declared March 19, 2004, to be "Dr. Jonathan David Farley Day."
He obtained his doctorate in mathematics from Oxford University in 1995, after winning Oxford's highest mathematics awards, the Senior Mathematical Prize and Johnson University Prize, in 1994. He graduated summa cum laude from Harvard University in 1991 with the second highest average in his graduating class.
Farley's work includes the solution of a problem posed by universal algebraist George Grätzer that remained unsolved for 34 years, and the solution (published in 2005) of a problem posed in 1981 by MIT mathematics professor Richard Stanley.
Reuben W. Hills Conference Room
The Antiplague System of the Former Soviet Union
The USSR's anti-plague system had four main responsibilities: monitor natural foci of endemic dread diseases such as plague, tularemia, anthrax, and Crimean-Congo hemorrhagic fever; protect the nation from imported exotic diseases (e.g., cholera and smallpox); protect the nation from biological warfare; and perform tasks for the Soviet offensive biological weapons program. Although the anti-plague system appears to have had successes in public health, its work undoubtedly was compromised by excessive secrecy, which forced anti-plague scientists to overcome substantial barriers before they could communicate with colleagues in other Soviet public health agencies, publish the results of their work, or travel to non-socialist countries. This system disintegrated after December 1991, but was resurrected as elements of the newly independent states' health systems.
Reporting on the findings of a recently concluded project carried out by the Center for Nonproliferation Studies (CNS), I will discuss: (1) the threats that the anti-plague systems' human resources, pathogen culture collections, and equipment pose to international security; (2) the promises these systems hold, should they regain their former level of scientific/technical capability, for enhancing international public health; and (3) current activities by U.S. government agencies to lessen the security and safety threats of these systems and, simultaneously, increase their public health capabilities. As appropriate, I will illustrate the presentation with photos taken by CNS personnel during visits to more than 40 anti-plague institutes and stations.
Dr. Raymond Zilinskas worked as a clinical microbiologist for 16 years, after graduating from California State University at Northridge with a BA in Biology, and from the University of Stockholm with a Filosofie Kandidat in Organic Chemistry. He then commenced graduate studies at the University of Southern California. His dissertation addressed policy issues generated by recombinant DNA research, including the applicability of genetic engineering techniques for military and terrorist purposes. After earning a PhD, Dr. Zilinskas worked at the U.S. Office of Technology Assessment (1981-1982), the United Nations Industrial Development Organization (1982-1986), and the University of Maryland Biotechnology Institute (UMBI) (1987-1998). In addition, he was an Adjunct Associate Professor at the Department of International Health, School of Hygiene and Public Health, Johns Hopkins University, until 1999.
In 1993, Dr. Zilinskas was appointed William Foster Fellow at the U.S. Arms Control and Disarmament Agency (ACDA), where he worked on biological and toxin warfare issues. In 1994, ACDA seconded Dr. Zilinskas to the United Nations Special Commission (UNSCOM), where he worked as a biological analyst for seven months. He participated in two biological warfare-related inspections in Iraq (June and October 1994) encompassing 61 biological research and production facilities. He set up a database containing data about key dual-use biological equipment in Iraq and developed a protocol for UNSCOM's on-going monitoring and verification program in the biological field.
After the fellowship, Dr. Zilinskas returned to the UMBI and Johns Hopkins University. In addition, he continued to serve as a long-term consultant to ACDA (now part of the U.S. Department of State), for which he carried out studies on Cuban allegations of U.S. biological attacks against Cuba's people, animals, and plants, and on United Nations investigations of chemical warfare in Southeast Asia and the Arabian Gulf region. Dr. Zilinskas also is a consultant to the U.S. Department of Defense.
In September 1998, Dr. Zilinskas was appointed Senior Scientist at the Center for Nonproliferation Studies (CNS), Monterey Institute of International Studies. On September 1, 2002, he was promoted to Director of the Chemical and Biological Weapons Nonproliferation Program at the CNS. His research focuses on achieving effective biological arms control, assessing the proliferation potential of the former Soviet Union's biological warfare program, and meeting the threat of bioterrorism. Dr. Zilinskas' book Biological Warfare: Modern Offense and Defense, a definitive account of how modern biotechnology has qualitatively changed developments related to biological weapons and defense, was published in 1999. In 2005, the important reference work Encyclopedia of Bioterrorism Defense, co-edited by Richard Pilch and Dr. Zilinskas, was published by Wiley. He currently is writing a book on the former Soviet Union's biological warfare program, including its history, organization, accomplishments, and proliferation potential, which will be published in 2006.
Reuben W. Hills Conference Room, East 207, Encina Hall
If It's Broke, Don't Fix It
When Norwegian and U.S. scientists launched the Black Brant XII sounding rocket from a small island off Norway's northwest coast on January 25, 1995, they intended for it to harmlessly collect scientific data about the Northern Lights. But when Russia's early warning system radars detected the rocket, they generated an alarm that entered the nuclear forces command and control system and reached the highest levels of government. An accidental nuclear war was never a possibility--by the time the alarm reached Russian President Boris Yeltsin, the rocket had been properly identified--but the incident clearly demonstrated the dangers of a launch-on-warning posture.
A Cold War hangover, launch-on-warning was designed to provide additional protection to nuclear forces by ensuring that a retaliatory attack could be initiated before a first strike obliterated its targets. Implementing launch-on-warning required substantial investment in a network of early warning radars and satellites--plus a command and control system that would allow missiles to be on constant "hair-trigger alert." Its cost proved high enough that only two nuclear powers--the United States and the Soviet Union--established a launch-on-warning capability. Nearly 15 years after the Soviet Union's collapse, neither the United States nor Russia has abandoned it.
Numerous proposals have tried to address launch-on-warning concerns. Most point to the Black Brant XII incident as evidence that the precipitous decline of the Russian early warning and command and control systems is the main problem. The argument is simple: If the early warning system was unreliable a decade ago when it was in relatively good shape, imagine how bad the situation is today, after years of decline. Accordingly, many believe the remedy lies in helping Russia compensate for the disrepair, either by creating arrangements that would allow Russia and the United States to share their early warning data, or by providing direct assistance to Russia that would allow it to upgrade its system. These proposals are misguided. Repairing the Russian early warning system would actually increase the danger of an accidental launch.
The reason for this is that the role of the Russian early warning system today is marginal at best. Even in its prime, the system could not provide the data necessary for a launch-on-warning strike. The radar network has always had serious gaps in coverage and the space-based segment of the system was not designed to detect sea-launched missiles. In addition to this, a series of problems plagued the system during its development and early deployment stages. As a result, the Soviet military learned to regard the alarms it generated with suspicion.
The system's deterioration has only added to doubts about its ability to provide a reliable warning. The breakup of the Soviet Union left most of the radars outside Russian territory. At present, Russia operates only three early warning satellites, while minimally reliable coverage of U.S. territory requires at least five. No second-generation satellites, which would expand coverage to the oceans, are operational today. This leaves Russia with an early warning system it can't really trust.
The lack of trust is exactly the reason why the decline of the system is much less dangerous than it may seem. The continued disrepair erodes confidence in the system's performance further and makes it much less likely that an alarm (whether real or false) would be acted upon. Attempts to repair or upgrade the system, on the other hand, would only increase the danger of miscalculation, since such actions would introduce new elements into an already complex system and boost confidence in its performance.
By the same logic, the United States should not be complacent about its early warning system simply because it is thought to be more robust and reliable than its Russian counterpart. High confidence in the U.S. system could make a technical malfunction--should one ever occur--an extremely dangerous event, since U.S. operators would be unlikely to question the information provided by the system.
The best way to deal with the dangers of accidental launch is to remove missiles from hair-trigger alert, for example by introducing physical barriers that would prevent a launch on warning. Technical solutions that have been suggested include removing warheads from missiles or limiting submarine patrol areas. None of these measures have been implemented, since they involve intrusive and cumbersome verification provisions that neither side is willing to accept. What these proposals don't take into account though is that the main goal of de-alerting--reducing the risk of accidental launch--does not require transparency or verification. If a missile does not have a warhead, it won't be able to leave a silo regardless of whether or not one can verify it. In this respect de-alerting is quite different from disarmament, where verification rightfully belongs.
Moreover, transparency could make de-alerting potentially dangerous. Reducing a missile's readiness for all the world to see could create instability during a crisis. If one country decides to bring its missiles back into operation, its counterpart might feel the need to do the same lest its forces remain unprepared for a launch. This might create a rush to re-alert forces, and the dangers associated with re-alerting could outweigh any de-alerting benefits. Ideally, de-alerting measures should be completely undetectable. This approach would remove missiles from the launch-on-warning equation while minimizing the instabilities associated with re-alerting.
With the verification hurdle removed, there is no reason why the United States and Russia should not make a public commitment to de-alert their strategic arsenals. They don't even need to do it together. De-alerting is beneficial even when done unilaterally. Of course, there will be plenty of questions about the value of commitments that are neither enforceable nor verifiable. But the value would be quite real--thousands of missiles would no longer be on hair-trigger alert. And the next time Norway launches a scientific sounding rocket, we can all breathe a little easier.
Censoring Science Won't Make Us Any Safer
In 1920, the Irish Republican Army reportedly considered a terrifying new weapon: typhoid-contaminated milk. Reading from an IRA memo he claimed had been captured in a recent raid, Sir Hamar Greenwood described to Parliament the ease with which "fresh and virulent cultures" could be obtained and introduced into milk served to British soldiers. Although the plot would only target the military, the memo expressed concern that the disease might spread to the general population.
Although the IRA never used this weapon, the incident illustrates that poisoning a nation's milk supply with biological agents hardly ranks as a new concept. Yet just two weeks ago, the National Academy of Sciences' journal suspended publication of an article analyzing the vulnerability of the U.S. milk supply to botulinum toxin, because the Department of Health and Human Services warned that information in the article provided a "road map for terrorists."
That approach may sound reasonable, but the effort to suppress scientific information reflects a dangerously outdated attitude. Today, information relating to microbiology is widely and instantly available, from the Internet to high school textbooks to doctoral theses. Our best defense against those who would use it as a weapon is to ensure that our own scientists have better information. That means encouraging publication.
The article in question, written by Stanford University professor Lawrence Wein and graduate student Yifan Liu, describes a theoretical terrorist who obtains a few grams of botulinum toxin on the black market and pours it into an unlocked milk tank. Transferred to giant dairy silos, the toxin contaminates a much larger supply. Because even a millionth of a gram may be enough to kill an adult, hundreds of thousands of people die. (Wein summarized the article in an op-ed he wrote for the New York Times.) The scenario is frightening, and it is meant to be -- the authors want the dairy industry and its federal regulators to take defensive action.
The national academy's suspension of the article reflects an increasing concern that publication of sensitive data can provide terrorists with a how-to manual, but it also brings to the fore an increasing anxiety in the scientific community that curbing the dissemination of research may impair our ability to counter biological threats. This dilemma reached national prominence in fall 2001, when 9/11 and the anthrax mailings drew attention to another controversial article. This one came from a team of Australian scientists.
Approximately every four years, Australia suffers a mouse infestation. In 1998, scientists in Canberra began examining the feasibility of using a highly contagious disease, mousepox, to alter the rodents' ability to reproduce. Their experiments yielded surprising results. Researchers found that inserting a gene from the mouse immune system (interleukin-4) into the pox virus produced a pathogen that killed mice naturally resistant to the disease -- all of them -- along with 60 percent of non-resistant mice that had been vaccinated against mousepox.
In February 2001 the American Society for Microbiology's (ASM) Journal of Virology reported the findings. Alarm ensued. The mousepox virus is closely related to smallpox -- one of the most dangerous pathogens known to humans. And the rudimentary nature of the experiment demonstrated how even basic, inexpensive microbiology can yield devastating results.
When the anthrax attacks burst into the news seven months later, the mousepox case became a lightning rod for deep-seated fears about biological weapons. The Economist reported rumors about the White House pressuring American microbiology journals to restrict publication of similar pieces. Samuel Kaplan, chair of the ASM publications board, convened a meeting of the editors in chief of the ASM's nine primary journals and two review journals. Hoping to head off government censorship, the organization -- while affirming its earlier decision -- ordered its peer reviewers to take national security and the society's code of ethics into account.
Publications were not the only things under pressure; research itself was. In spring 2002 the newly formed Department of Homeland Security developed an information-security policy to prevent certain foreign nationals from gaining access to a range of experimental data. New federal regulations required that particular universities and laboratories submit to unannounced inspections, register their supplies and obtain security clearances. Legislation required that all genetic engineering experiments be cleared by the government.
On the mousepox front, however, important developments were transpiring. Because the Australian research had entered the public domain, scientists around the world began working on the problem. In November 2003, St. Louis University announced an effective medical defense against a pathogen similar to -- but even more deadly than -- the one created in Australia. This result would undoubtedly not have been achieved, or at least not as quickly, without the attention drawn by the ASM article.
The dissemination of nuclear technology presents an obvious comparison. The 1946 Atomic Energy Act classifies nuclear information "from birth." Strong arguments can be made in favor of such restrictions: The science involved in the construction of the bomb was complex and its application primarily limited to weapons. A short-term monopoly was possible. Secrecy bought the United States time to establish an international nonproliferation regime. And little public good would have been achieved by making the information widely available.
Biological information and the issues surrounding it are different. It is not possible to establish even a limited monopoly over microbiology. The field is too fundamental to the improvement of global public health, and too central to the development of important industries such as pharmaceuticals and plastics, to be isolated. Moreover, the list of diseases that pose a threat ranges from high-end bugs, like smallpox, to common viruses, such as influenza. Where does one draw the line for national security?
Experience suggests that the government errs on the side of caution. In 1951, the Invention Secrecy Act gave the government the authority to suppress any design it deemed detrimental to national defense. Certain areas of research -- atomic energy and cryptography -- consistently fell within its purview. But the state also placed secrecy orders on aspects of cold fusion, space technology, radar missile systems, citizens band radio voice scramblers, optical engineering and vacuum technology. Such caution, in the microbiology realm, may yield devastating results. It is not in the national interest to stunt research into biological threats.
In fact, the more likely menace comes from naturally occurring diseases. In 1918 a natural outbreak of the flu infected one-fifth of the world's population and 25 percent of the United States'. Within two years it killed more than 650,000 Americans, resulting in a 10-year drop in average lifespan. Despite constant research into emerging strains, the American Lung Association estimates that the flu and related complications kill 36,000 Americans each year. Another 5,000 die annually from food-borne pathogens -- an extraordinarily large number of which have no known cure. The science involved in responding to these diseases is incremental, meaning that small steps taken by individual laboratories around the world need to be shared for larger progress to be made.
The idea that scientific freedom strengthens national security is not new. In the early 1980s, a joint Panel on Scientific Communication and National Security concluded security by secrecy was untenable. Its report called instead for security by accomplishment -- ensuring strength through advancing research. Ironically, one of the three major institutions participating was the National Academy of Sciences -- the body that suspended publication of the milk article earlier this month.
The government has a vested interest in creating a public conversation about ways in which our society is vulnerable to attack. Citizens are entitled to know when their milk, their water, their bridges, their hospitals lack security precautions. If discussion of these issues is censored, the state and private industry come under less pressure to alter behavior; indeed, powerful private interests may actively lobby against having to install expensive protections. And failure to act may be deadly.
Terrorists will obtain knowledge. Our best option is to blunt their efforts to exploit it. That means developing, producing and stockpiling effective vaccines. It means funding research into biosensors -- devices that detect the presence of toxic substances in the environment -- and creating more effective reporting requirements for early identification of disease outbreaks. And it means strengthening our public health system.
For better or worse, the cat is out of the bag -- something brought home to me last weekend when I visited the Tech Museum of Innovation in San Jose. One hands-on exhibit allowed children to transfer genetic material from one species to another. I watched a 4-year-old girl take a red test tube whose contents included a gene that makes certain jellyfish glow green. Using a pipette, she transferred the material to a blue test tube containing bacteria. She cooled the solution, then heated it, allowing the gene to enter the bacteria. Following instructions on a touch-screen computer, she transferred the contents to a petri dish, wrote her name on the bottom, and placed the dish in an incubator. The next day, she could log on to a Web site to view her experiment, and see her bacteria glowing a genetically modified green.
In other words, the pre-kindergartener (with a great deal of help from the museum) had conducted an experiment that echoed the Australian mousepox study. Obviously, this is not something the child could do in her basement. But just as obviously, the state of public knowledge is long past anyone's ability to censor it.
Allowing potentially harmful information to enter the public domain flies in the face of our traditional way of thinking about national security threats. But we have entered a new world. Keeping scientists from sharing information damages our ability to respond to terrorism and to natural disease, which is more likely and just as devastating. Our best hope to head off both threats may well be to stay one step ahead.
Barracks and Brothels: Peacekeepers and Human Trafficking in the Balkans
Dr. Sarah Mendelson is a senior fellow with the Russia and Eurasia Program at the Center for Strategic and International Studies. Before joining CSIS in 2001, she taught international politics at the Fletcher School of Law and Diplomacy, Tufts University. Dr. Mendelson received her B.A. in history from Yale University, and her Ph.D. in political science from Columbia University. She also earned a certificate from the Harriman Institute.
At CSIS, she manages several projects that explore the links between security and human rights. Her current research includes collaborative work on public opinion surveys of Russian attitudes on democracy, human rights, Chechnya and the military. She directed a collaborative study evaluating the impact of Western democracy assistance to Eastern Europe and Eurasia. In addition, she has served on the staff of the National Democratic Institute's Moscow office, and was a resident associate at the Carnegie Endowment for International Peace. She has also been a fellow at CISAC and at Princeton University's Center of International Studies.
Dr. Mendelson serves on the steering committee of Human Rights Watch and the editorial board of International Security, and is a member of the Council on Foreign Relations and the Program on New Approaches to Russian Security.
Reuben W. Hills Conference Room, East 207, Encina Hall
The Politics, Ethics, and Law of Racially-Targeted Biological Weapons
Tonya Putnam has a J.D. from Harvard Law School and received her Ph.D. from the Department of Political Science at Stanford University in March 2005. She is currently a postdoctoral fellow at CISAC, but will be moving to the Center on Globalization and Governance at Princeton University for a postdoctoral fellowship next academic year. Her dissertation, Courts Without Borders? The Politics and Law of Extraterritorial Regulation, explores the extraterritorial reach of U.S. federal courts and regulatory institutions, and implications for the development of de facto international regulatory frameworks. Other research areas have included human rights in peace implementation missions, comparative legal responses to the threat of cybercrime and cyberterrorism, risk communication in the context of radiological terrorism (dirty bombs), and obstacles to military reform in Russia.
Reuben W. Hills Conference Room, East 207, Encina Hall
Tonya Putnam
Tonya L. Putnam (J.D./Ph.D.) is a Research Scholar at the Arnold A. Salzman Institute of War and Peace Studies at Columbia University. From 2007 to 2020 she was a member of the Political Science Department at Columbia University. Tonya's work engages a variety of topics in international relations and international law, with emphasis on jurisdiction and jurisdictional overlaps in international regulatory and security matters. She is the author of Courts Without Borders: Law, Politics, and U.S. Extraterritoriality, along with several articles in International Organization, International Security, and the Human Rights Review. She is also a member (inactive) of the California State Bar.