Science and Technology

Zia Mian, a research assistant with the Program on Science and Global Security (PS&GS) at Princeton University and lecturer in public and international affairs at the Woodrow Wilson School, has been with PS&GS since 1997. His interests include nuclear weapons and nuclear energy programs in South Asia, and finding alternative policies that can contribute to disarmament and sustainable development. With Dr. Pervez Hoodbhoy, Mian co-produced Crossing the Lines, a documentary film about India, Pakistan, and the battle over Kashmir, which was shown at CISAC this past summer. He has edited and co-edited a number of books on South Asia, including Out of the Nuclear Shadow (co-edited with Smitu Kothari; Zed Press, London and Rainbow Press, New Delhi, 2001). Mian has also co-edited a volume with Iftikhar Ahmad and Dohra Ahmad, Between Past and Future: Selected Essays on Pakistan by Eqbal Ahmad (Oxford University Press, Karachi).

Reuben W. Hills Conference Room

Speaker: Zia Mian, Research Assistant, Program on Science and Global Security, and Lecturer, Woodrow Wilson School of Public and International Affairs, Princeton University
Seminars

The USSR's anti-plague system had four main responsibilities: monitor natural foci of endemic dread diseases such as plague, tularemia, anthrax, and Crimean-Congo hemorrhagic fever; protect the nation from imported exotic diseases (e.g., cholera and smallpox); protect the nation from biological warfare; and perform tasks for the Soviet offensive biological weapons program. Although the anti-plague system appears to have had successes in public health, its work undoubtedly was compromised by excessive secrecy, which led to anti-plague scientists having to overcome substantial barriers before being able to communicate with colleagues in other Soviet public health agencies, publish the results of their work, and undertake travel to non-socialist countries. This system disintegrated after December 1991, but was resurrected as elements of the newly independent states' health systems.

Reporting on the findings of a recently concluded project carried out by the Center for Nonproliferation Studies (CNS), I will discuss: (1) the threats that the anti-plague systems' human resources, pathogen culture collections, and equipment pose to international security; (2) the promises these systems hold, should they regain their former level of scientific/technical capability, for enhancing international public health; and (3) current activities by U.S. government agencies to lessen the security and safety threats of these systems and, simultaneously, increase their public health capabilities. As appropriate, I will illustrate the presentation with photos taken by CNS personnel during visits to more than 40 anti-plague institutes and stations.

Dr. Raymond Zilinskas worked as a clinical microbiologist for 16 years, after graduating from California State University at Northridge with a BA in Biology, and from the University of Stockholm with a Filosofie Kandidat in Organic Chemistry. He then commenced graduate studies at the University of Southern California. His dissertation addressed policy issues generated by recombinant DNA research, including the applicability of genetic engineering techniques for military and terrorist purposes. After earning a PhD, Dr. Zilinskas worked at the U.S. Office of Technology Assessment (1981-1982), the United Nations Industrial Development Organization (1982-1986), and the University of Maryland Biotechnology Institute (UMBI) (1987-1998). In addition, he was an Adjunct Associate Professor at the Department of International Health, School of Hygiene and Public Health, Johns Hopkins University, until 1999.

In 1993, Dr. Zilinskas was appointed William Foster Fellow at the U.S. Arms Control and Disarmament Agency (ACDA), where he worked on biological and toxin warfare issues. In 1994, ACDA seconded Dr. Zilinskas to the United Nations Special Commission (UNSCOM), where he worked as a biological analyst for seven months. He participated in two biological warfare-related inspections in Iraq (June and October 1994) encompassing 61 biological research and production facilities. He set up a database containing data about key dual-use biological equipment in Iraq and developed a protocol for UNSCOM's on-going monitoring and verification program in the biological field.

After the fellowship, Dr. Zilinskas returned to the UMBI and Johns Hopkins University. In addition, he continued to serve as a long-term consultant to ACDA (now part of the U.S. Department of State), for which he carried out studies on Cuban allegations of U.S. biological attacks against its people, animals, and plants and investigations carried out by the United Nations of chemical warfare in Southeast Asia and the Arabian Gulf region. Dr. Zilinskas also is a consultant to the U.S. Department of Defense.

In September 1998, Dr. Zilinskas was appointed Senior Scientist at the Center for Nonproliferation Studies (CNS), Monterey Institute of International Studies. On September 1, 2002, he was promoted to the Director of the Chemical and Biological Weapons Nonproliferation Program at the CNS. His research focuses on achieving effective biological arms control, assessing the proliferation potential of the former Soviet Union's biological warfare program, and meeting the threat of bioterrorism. Dr. Zilinskas' book Biological Warfare: Modern Offense and Defense, a definitive account on how modern biotechnology has qualitatively changed developments related to biological weapons and defense, was published in 1999. In 2005, the important reference work Encyclopedia of Bioterrorism Defense, which is co-edited by Richard Pilch and Dr. Zilinskas, was published by Wiley. He currently is writing a book on the former Soviet Union's biological warfare program, including its history, organization, accomplishments, and proliferation potential, which will be published in 2006.

Reuben W. Hills Conference Room, East 207, Encina Hall

Speaker: Ray Zilinskas, Director, Chemical and Biological Weapons Nonproliferation Program, Center for Nonproliferation Studies, Monterey Institute
Seminars

This talk is based on chapter 4 of the speaker's dissertation, "North Korea," provided in the link below.

Alexander H. Montgomery is a post-doctoral fellow at the Center for International Security and Cooperation, Stanford University. He has a BA in physics from the University of Chicago, an MA in energy and resources from the University of California, Berkeley, an MA in sociology from Stanford University, and will be receiving his PhD in political science from Stanford University in fall 2005. He has worked as a research associate in high energy physics on the BaBar experiment at Lawrence Berkeley National Laboratory and as a graduate research assistant at the Center for International Security Affairs at Los Alamos National Laboratory. His research interests include political organizations, weapons of mass disruption and destruction, social studies of technology, and interstate social relations. His dissertation was on post-Cold War U.S. counterproliferation policy, evaluating the efficacy of policies towards North Korea, Iran, and proliferation networks.

Reuben W. Hills Conference Room, East 207, Encina Hall

Speaker: Alex Montgomery, Postdoctoral Fellow, CISAC; PhD, Department of Political Science, Stanford
Seminars

Allen S. Weiner examines to what degree the global "war on terror" that has erupted since September 11, 2001 fits the "just war" doctrine of international relations, or even whether it can properly be considered a war at all in terms of positive international law. Whether or not these labels apply is not merely a matter of academic debate, Weiner notes, but has broader implications for the international legal responsibilities of the United States in Afghanistan, Iraq and other theaters of the "war on terror."

Reuben W. Hills Conference Room, East 207, Encina Hall

Stanford Law School
559 Nathan Abbott Way
Neukom Faculty Office Building, Room N238
Stanford, CA 94305-8610

(650) 724-5892
(650) 725-2592
Senior Lecturer in Law
Director, Stanford Program in International Law
Co-Director, Stanford Center on International Conflict and Negotiation
CISAC Core Faculty Member
Europe Center Affiliated Faculty

Allen S. Weiner is senior lecturer in law and director of the Stanford Program in International Law at Stanford Law School. He is also the co-director of the Stanford Center on International Conflict and Negotiation. He is an international legal scholar with expertise in such wide-ranging fields as international and national security law, the law of war, international conflict resolution, and international criminal law (including transitional justice). His scholarship focuses on international law and the response to the contemporary security threats of international terrorism, the proliferation of weapons of mass destruction, and situations of widespread humanitarian atrocities. He also explores the relationship between international and domestic law in the context of asymmetric armed conflicts between the United States and nonstate groups and the response to terrorism. In the realm of international conflict resolution, his highly multidisciplinary work analyzes the barriers to resolving violent political conflicts, with a particular focus on the Israeli-Palestinian conflict.

Weiner’s scholarship is deeply informed by experience; for more than a decade he practiced international law in the U.S. Department of State, serving as an attorney-adviser in the Office of the Legal Adviser and as legal counselor at the U.S. Embassy in The Hague. In those capacities, he advised government policy-makers, negotiated international agreements, and represented the United States in litigation before the Iran-United States Claims Tribunal, the International Criminal Tribunal for the Former Yugoslavia, and the International Court of Justice. He teaches courses in public international law, international conflict resolution, and international security matters at Stanford Law School.

Weiner is the author of "Constitutions as Peace Treaties: A Cautionary Tale for the Arab Spring" in the Stanford Law Review Online (2011) and co-author (with Barry E. Carter) of International Law (6th ed. 2011). Other publications include "The Torture Memos and Accountability" in the American Society of International Law Insight (2009), "Law, Just War, and the International Fight Against Terrorism: Is It War?" in Intervention, Terrorism, and Torture: Contemporary Challenges to Just War Theory (Steven P. Lee, ed.) (2007), "Enhancing Implementation of U.N. Security Council Resolution 1540: Report of the Center on International Security and Cooperation" (with Chaim Braun, Michael May & Roger Speed) (September 2007), and "The Use of Force and Contemporary Security Threats: Old Medicine for New Ills?", Stanford Law Review (2006).

Weiner has worked on several Supreme Court amicus briefs concerning national security and international law issues, including cases involving "war on terror" detainees. He has also submitted petitions to the United Nations Working Group on Arbitrary Detention on behalf of Vietnamese social and political activists detained by their government for exercising free speech rights.

Weiner earned a BA from Harvard College and a JD from Stanford Law School.

CV
Speaker: Allen Weiner, Warren Christopher Professor of the Practice of International Law and Diplomacy, FSI; Stanford Law School
Seminars

Concepts and techniques from mathematics--specifically, from lattice theory and reflexive theory--have already been applied to counterterrorism and computer security problems. The following is a partial list of such problems:

  1. Strategies for disrupting terrorist cells
  2. Data analysis of terrorist activity
  3. Border penetration and security
  4. Terrorist cell formation
  5. Information security
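
The first problem on this list can be made concrete with a toy sketch. All of the graph data, capacities, and helper code below are illustrative assumptions, not drawn from the research the article describes: one simple way to formalize "disrupting a cell" is to model the cell as a directed hierarchy from leaders down to foot soldiers and compute the minimum number of members whose removal severs every chain of command -- a minimum node cut, found here by node splitting plus breadth-first-search max flow.

```python
from collections import deque

def min_node_cut(edges, leaders, actors):
    # Split each member v into v_in -> v_out with capacity 1, so that
    # "cutting" the unit-capacity arc corresponds to removing that member.
    cap = {}
    def add(u, v, c):
        cap.setdefault(u, {})[v] = cap.get(u, {}).get(v, 0) + c
        cap.setdefault(v, {}).setdefault(u, 0)  # residual arc
    nodes = set()
    for u, v in edges:
        nodes |= {u, v}
    for v in nodes:
        add((v, 'in'), (v, 'out'), 1)
    for u, v in edges:  # command links themselves are uncuttable (infinite cap)
        add((u, 'out'), (v, 'in'), 10**9)
    S, T = 'S', 'T'
    for l in leaders:
        add(S, (l, 'in'), 10**9)
    for a in actors:
        add((a, 'out'), T, 10**9)
    # Edmonds-Karp: repeatedly augment along shortest residual paths.
    flow = 0
    while True:
        parent = {S: None}
        q = deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            return flow  # max flow == size of the minimum node cut
        v, bottleneck = T, 10**9
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = T
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

# A hypothetical 6-member cell: one leader, two middlemen, three foot soldiers.
edges = [('L', 'M1'), ('L', 'M2'), ('M1', 'A1'), ('M1', 'A2'),
         ('M2', 'A2'), ('M2', 'A3')]
print(min_node_cut(edges, ['L'], ['A1', 'A2', 'A3']))  # prints 1: removing L suffices
```

The order-theoretic refinements discussed in this literature go further (e.g., counting cutsets in a poset rather than a plain graph), but the min-cut computation above captures the basic disruption question.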

This article proposes the creation of a European Institute for Mathematical Methods in Counterterrorism (IMMC), to be based in Austria. Such an institute would require minimal investment but could serve as a catalyst to draw several million euros in research grants and contracts to Austria. This influx of funding would benefit not merely scientists and firms working in homeland security, but other aspects of Austrian science as well.

Publication type: Journal article
Journal: Bridges

When Norwegian and U.S. scientists launched the Black Brant XII sounding rocket from a small island off Norway's northwest coast on January 25, 1995, they intended for it to harmlessly collect scientific data about the Northern Lights. But when Russia's early warning system radars detected the rocket, they generated an alarm that entered the nuclear forces command and control system and reached the highest levels of government. An accidental nuclear war was never a possibility--by the time the alarm reached Russian President Boris Yeltsin, the rocket had been properly identified--but the incident clearly demonstrated the dangers of a launch-on-warning posture.

A Cold War hangover, launch-on-warning was designed to provide additional protection to nuclear forces by ensuring that a retaliatory attack could be initiated before a first strike obliterated its targets. Implementing launch-on-warning required substantial investment in a network of early warning radars and satellites--plus a command and control system that would allow missiles to be kept on constant "hair-trigger alert." Its cost proved high enough that only two nuclear powers--the United States and the Soviet Union--established a launch-on-warning capability. Nearly 15 years after the Soviet Union's collapse, neither the United States nor Russia has abandoned it.

Numerous proposals have tried to address launch-on-warning concerns. Most point to the Black Brant XII incident as evidence that the precipitous decline of the Russian early warning and command and control systems is the main problem. The argument is simple: If the early warning system was unreliable a decade ago when it was in relatively good shape, imagine how bad the situation is today, after years of decline. Accordingly, many believe the remedy lies in helping Russia compensate for the disrepair, either by creating arrangements that would allow Russia and the United States to share their early warning data, or by providing direct assistance to Russia that would allow it to upgrade its system. These proposals are misguided. Repairing the Russian early warning system would actually increase the danger of an accidental launch.

The reason for this is that the role of the Russian early warning system today is marginal at best. Even in its prime, the system could not provide the data necessary for a launch-on-warning strike. The radar network has always had serious gaps in coverage and the space-based segment of the system was not designed to detect sea-launched missiles. In addition to this, a series of problems plagued the system during its development and early deployment stages. As a result, the Soviet military learned to regard the alarms it generated with suspicion.

The system's deterioration has only added to doubts about its ability to provide a reliable warning. The breakup of the Soviet Union left most of the radars outside Russian territory. At present, Russia operates only three early warning satellites, while minimally reliable coverage of U.S. territory requires at least five. No second-generation satellites, which would expand coverage to the oceans, are operational today. This leaves Russia with an early warning system it can't really trust.

The lack of trust is exactly why the decline of the system is much less dangerous than it may seem. The continued disrepair further erodes confidence in the system's performance and makes it much less likely that an alarm (whether real or false) would be acted upon. Attempts to repair or upgrade the system, on the other hand, would only increase the danger of miscalculation, since such actions would introduce new elements into an already complex system and boost confidence in its performance.

By the same logic, the United States should not be complacent about its early warning system simply because it is thought to be more robust and reliable than its Russian counterpart. High confidence in the U.S. system could make a technical malfunction--should one ever occur--an extremely dangerous event, since U.S. operators would be unlikely to question the information provided by the system.

The best way to deal with the dangers of accidental launch is to remove missiles from hair-trigger alert, for example by introducing physical barriers that would prevent a launch on warning. Technical solutions that have been suggested include removing warheads from missiles or limiting submarine patrol areas. None of these measures have been implemented, since they involve intrusive and cumbersome verification provisions that neither side is willing to accept. What these proposals don't take into account though is that the main goal of de-alerting--reducing the risk of accidental launch--does not require transparency or verification. If a missile does not have a warhead, it won't be able to leave a silo regardless of whether or not one can verify it. In this respect de-alerting is quite different from disarmament, where verification rightfully belongs.

Moreover, transparency could make de-alerting potentially dangerous. Reducing a missile's readiness for all the world to see could create instability during a crisis. If one country decides to bring its missiles back into operation, its counterpart might feel the need to do the same lest its forces remain unprepared for a launch. This might create a rush to re-alert forces, and the dangers associated with re-alerting could outweigh any de-alerting benefits. Ideally, de-alerting measures should be completely undetectable. This approach would remove missiles from the launch-on-warning equation while minimizing the instabilities associated with re-alerting.

With the verification hurdle removed, there is no reason why the United States and Russia should not make a public commitment to de-alert their strategic arsenals. They don't even need to do it together. De-alerting is beneficial even when done unilaterally. Of course, there will be plenty of questions about the value of commitments that are neither enforceable nor verifiable. But the value would be quite real--thousands of missiles would no longer be on hair-trigger alert. And the next time Norway launches a scientific sounding rocket, we can all breathe a little easier.

Publication type: Journal article
Journal: Bulletin of the Atomic Scientists
Author: Pavel Podvig

In 1920, the Irish Republican Army reportedly considered a terrifying new weapon: typhoid-contaminated milk. Reading from an IRA memo he claimed had been captured in a recent raid, Sir Hamar Greenwood described to Parliament the ease with which "fresh and virulent cultures" could be obtained and introduced into milk served to British soldiers. Although the plot would only target the military, the memo expressed concern that the disease might spread to the general population.

Although the IRA never used this weapon, the incident illustrates that poisoning a nation's milk supply with biological agents hardly ranks as a new concept. Yet just two weeks ago, the National Academy of Sciences' journal suspended publication of an article analyzing the vulnerability of the U.S. milk supply to botulinum toxin, because the Department of Health and Human Services warned that information in the article provided a "road map for terrorists."

That approach may sound reasonable, but the effort to suppress scientific information reflects a dangerously outdated attitude. Today, information relating to microbiology is widely and instantly available, from the Internet to high school textbooks to doctoral theses. Our best defense against those who would use it as a weapon is to ensure that our own scientists have better information. That means encouraging publication.

The article in question, written by Stanford University professor Lawrence Wein and graduate student Yifan Liu, describes a theoretical terrorist who obtains a few grams of botulinum toxin on the black market and pours it into an unlocked milk tank. Transferred to giant dairy silos, the toxin contaminates a much larger supply. Because even a millionth of a gram may be enough to kill an adult, hundreds of thousands of people die. (Wein summarized the article in an op-ed he wrote for the New York Times.) The scenario is frightening, and it is meant to be -- the authors want the dairy industry and its federal regulators to take defensive action.

The national academy's suspension of the article reflects an increasing concern that publication of sensitive data can provide terrorists with a how-to manual, but it also brings to the fore an increasing anxiety in the scientific community that curbing the dissemination of research may impair our ability to counter biological threats. This dilemma reached national prominence in fall 2001, when 9/11 and the anthrax mailings drew attention to another controversial article. This one came from a team of Australian scientists.

Approximately every four years, Australia suffers a mouse infestation. In 1998, scientists in Canberra began examining the feasibility of using a highly contagious disease, mousepox, to alter the rodents' ability to reproduce. Their experiments yielded surprising results. Researchers working with mice naturally resistant to the disease found that combining a gene from the rodent's immune system (interleukin-4) with the pox virus and inserting the pathogen into the animals killed them -- all of them. The engineered virus also killed 60 percent of mice that were not naturally resistant but had been vaccinated against mousepox.

In February 2001 the American Society for Microbiology's (ASM) Journal of Virology reported the findings. Alarm ensued. The mousepox virus is closely related to smallpox -- one of the most dangerous pathogens known to humans. And the rudimentary nature of the experiment demonstrated how even basic, inexpensive microbiology can yield devastating results.

When the anthrax attacks burst into the news seven months later, the mousepox case became a lightning rod for deep-seated fears about biological weapons. The Economist reported rumors about the White House pressuring American microbiology journals to restrict publication of similar pieces. Samuel Kaplan, chair of the ASM publications board, convened a meeting of the editors in chief of the ASM's nine primary journals and two review journals. Hoping to head off government censorship, the organization -- while affirming its earlier decision -- ordered its peer reviewers to take national security and the society's code of ethics into account.

Publications were not the only target of pressure; research itself came under scrutiny. In spring 2002 the newly formed Department of Homeland Security developed an information-security policy to prevent certain foreign nationals from gaining access to a range of experimental data. New federal regulations required that particular universities and laboratories submit to unannounced inspections, register their supplies and obtain security clearances. Legislation required that all genetic engineering experiments be cleared by the government.

On the mousepox front, however, important developments were transpiring. Because the Australian research had entered the public domain, scientists around the world began working on the problem. In November 2003, St. Louis University announced an effective medical defense against a pathogen similar to -- but even more deadly than -- the one created in Australia. This result would undoubtedly not have been achieved, or at least not as quickly, without the attention drawn by the ASM article.

The dissemination of nuclear technology presents an obvious comparison. The 1946 Atomic Energy Act classifies nuclear information "from birth." Strong arguments can be made in favor of such restrictions: The science involved in the construction of the bomb was complex and its application primarily limited to weapons. A short-term monopoly was possible. Secrecy bought the United States time to establish an international nonproliferation regime. And little public good would have been achieved by making the information widely available.

Biological information and the issues surrounding it are different. It is not possible to establish even a limited monopoly over microbiology. The field is too fundamental to the improvement of global public health, and too central to the development of important industries such as pharmaceuticals and plastics, to be isolated. Moreover, the list of diseases that pose a threat ranges from high-end bugs, like smallpox, to common viruses, such as influenza. Where does one draw the line for national security?

Experience suggests that the government errs on the side of caution. In 1951, the Invention Secrecy Act gave the government the authority to suppress any design it deemed detrimental to national defense. Certain areas of research -- atomic energy and cryptography -- consistently fell within its purview. But the state also placed secrecy orders on aspects of cold fusion, space technology, radar missile systems, citizens band radio voice scramblers, optical engineering and vacuum technology. Such caution, in the microbiology realm, may yield devastating results. It is not in the national interest to stunt research into biological threats.

In fact, the more likely menace comes from naturally occurring diseases. In 1918 a natural outbreak of the flu infected one-fifth of the world's population and 25 percent of the United States'. Within two years it killed more than 650,000 Americans, resulting in a 10-year drop in average lifespan. Despite constant research into emerging strains, the American Lung Association estimates that the flu and related complications kill 36,000 Americans each year. Another 5,000 die annually from food-borne pathogens -- an extraordinarily large number of which have no known cure. The science involved in responding to these diseases is incremental, meaning that small steps taken by individual laboratories around the world need to be shared for larger progress to be made.

The idea that scientific freedom strengthens national security is not new. In the early 1980s, a joint Panel on Scientific Communication and National Security concluded that security by secrecy was untenable. Its report called instead for security by accomplishment -- ensuring strength through advancing research. Ironically, one of the three major institutions participating was the National Academy of Sciences -- the body that suspended publication of the milk article earlier this month.

The government has a vested interest in creating a public conversation about ways in which our society is vulnerable to attack. Citizens are entitled to know when their milk, their water, their bridges, their hospitals lack security precautions. If discussion of these issues is censored, the state and private industry come under less pressure to alter behavior; indeed, powerful private interests may actively lobby against having to install expensive protections. And failure to act may be deadly.

Terrorists will obtain knowledge. Our best option is to blunt their efforts to exploit it. That means developing, producing and stockpiling effective vaccines. It means funding research into biosensors -- devices that detect the presence of toxic substances in the environment -- and creating more effective reporting requirements for early identification of disease outbreaks. And it means strengthening our public health system.

For better or worse, the cat is out of the bag -- something brought home to me last weekend when I visited the Tech Museum of Innovation in San Jose. One hands-on exhibit allowed children to transfer genetic material from one species to another. I watched a 4-year-old girl take a red test tube whose contents included a gene that makes certain jellyfish glow green. Using a pipette, she transferred the material to a blue test tube containing bacteria. She cooled the solution, then heated it, allowing the gene to enter the bacteria. Following instructions on a touch-screen computer, she transferred the contents to a petri dish, wrote her name on the bottom, and placed the dish in an incubator. The next day, she could log on to a Web site to view her experiment, and see her bacteria glowing a genetically modified green.

In other words, the pre-kindergartener (with a great deal of help from the museum) had conducted an experiment that echoed the Australian mousepox study. Obviously, this is not something the child could do in her basement. But just as obviously, the state of public knowledge is long past anyone's ability to censor it.

Allowing potentially harmful information to enter the public domain flies in the face of our traditional way of thinking about national security threats. But we have entered a new world. Keeping scientists from sharing information damages our ability to respond to terrorism and to natural disease, which is more likely and just as devastating. Our best hope to head off both threats may well be to stay one step ahead.

Publication type: Commentary
Publisher: Washington Post

Motivated by the difficulty biometric systems have in correctly matching fingerprints with poor image quality, we formulate and solve a game-theoretic model of the identification problem in two settings: U.S. visa applicants are checked against a list of visa holders to detect visa fraud, and visitors entering the U.S. are checked against a watchlist of criminals and suspected terrorists. For three types of biometric strategies, we solve the game in which the U.S. Government chooses the strategy's optimal parameter values to maximize the detection probability subject to a constraint on the mean biometric processing time per legal visitor, and then the terrorist chooses the image quality to minimize the detection probability. At current inspector staffing levels at ports of entry, our model predicts that a quality-dependent two-finger strategy achieves a detection probability of 0.733, compared to 0.526 under the quality-independent two-finger strategy that is currently implemented at the U.S. border. Increasing the staffing level of inspectors offers only minor increases in the detection probability for these two strategies. Using more than two fingers to match visitors with poor image quality allows a detection probability of 0.949 under current staffing levels, but may require major changes to the current U.S. biometric program. The detection probabilities during visa application are approximately 11-22% smaller than at ports of entry for all three strategies, but the same qualitative conclusions hold.
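
The minimax structure of such a game can be illustrated with a toy sketch. The functional forms, numbers, and names below are invented for illustration and bear no relation to the paper's actual model: the government picks a match-score threshold subject to a processing-time budget; the adversary then picks the fingerprint image quality that minimizes the detection probability; and the government's best choice maximizes that worst case.

```python
# Toy minimax sketch (all quantities are illustrative assumptions).

def detection_prob(t, q):
    # Assumed form: detection improves with image quality q and with
    # threshold leniency (lower t), clipped to [0, 1].
    return max(0.0, min(1.0, q * (1.2 - t)))

def mean_processing_time(t):
    # Assumed form: lenient thresholds trigger more secondary inspections.
    return 5.0 + 20.0 * (1.0 - t)

TIME_BUDGET = 15.0  # seconds per legal visitor (assumed)
thresholds = [i / 100 for i in range(0, 101)]
qualities = [0.2, 0.5, 0.8]  # adversary's feasible image qualities (assumed)

best_t, best_val = None, -1.0
for t in thresholds:
    if mean_processing_time(t) > TIME_BUDGET:
        continue  # violates the processing-time constraint
    # Adversary's best response: the quality that minimizes detection.
    worst = min(detection_prob(t, q) for q in qualities)
    if worst > best_val:
        best_t, best_val = t, worst

print(best_t, round(best_val, 3))  # prints: 0.5 0.14
```

With these assumed forms, the constraint binds at t = 0.5, and since the adversary always prefers the lowest quality, the government's optimal threshold is the most lenient one the time budget allows.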

Publication type: Journal article
Journal: Proceedings of the National Academy of Sciences
Author: Lawrence M. Wein

Despite the increasing centrality of computer software in modern weapons systems, computing remains relatively underrepresented in public debates about weapons policy. For example, in 1991, a software glitch in the Patriot missile defense system allowed an Iraqi Scud missile to strike a U.S. Army barracks, killing 28 people, yet physicists remain the most prominent technical critics of this system. This talk suggests that the different patterns of political intervention exhibited by physicists and computer experts cannot be explained by technical relevance. It suggests alternate explanations by examining the processes by which technical judgments are generated and rendered authoritative in the political arena, using insights from science and technology studies. These processes are then illustrated by comparing how computer experts and physicists intervened in political controversy about the feasibility of 'Star Wars', President Ronald Reagan's proposal to develop a missile defense that would render the massive Soviet nuclear arsenal 'impotent and obsolete.' I compare how critical groups of physicists and computer professionals attempted to persuade the public that a perfect missile shield could not be built. This analysis suggests that sharp differences in the two groups' technical frames of analysis, rhetoric, and professional organizations all contributed to the physicists' ability to demonstrate a much higher level of consensus and authority in the political arena.

Reuben W. Hills Conference Room

Affiliate

Slayton’s research and teaching examine the relationships between and among risk, governance, and expertise, with a focus on international security and cooperation since World War II. Slayton’s current book project, Shadowing Cybersecurity, examines the historical emergence of cybersecurity expertise. Shadowing Cybersecurity shows how efforts to establish credible expertise in corporate, governmental, and non-governmental contexts have produced varying and sometimes conflicting expert practices. Nonetheless, all cybersecurity experts wrestle with the irreducible uncertainties that characterize intelligent adversaries, and the fundamental inability to prove that systems are secure. The book shows how cybersecurity experts have paradoxically gained credibility by making threats and vulnerabilities visible, while acknowledging that more always remain in the shadows.

Slayton’s first book, Arguments that Count: Physics, Computing, and Missile Defense, 1949-2012 (MIT Press, 2013), shows how the rise of a new field of expertise in computing reshaped public policies and perceptions about the risks of missile defense in the United States. In 2015, Arguments that Count won the Computer History Museum Prize. In 2016, Slayton was awarded a National Science Foundation CAREER grant for her project “Enacting Cybersecurity Expertise.” In 2019, Slayton was also a recipient of the United States Presidential Early Career Award for Scientists and Engineers, for her NSF CAREER project.

CV
Rebecca Slayton, Science Fellow, CISAC
Seminars