Livestream: Please click here to join the livestream webinar via Zoom, or log in with webinar ID 819 377 559.
About this Event: Are we still in the Nuclear Age? Is this the Age of AI? Are we entering the Age of Synthetic Biology? Technologies such as nuclear power, artificial intelligence, and synthetic biology are “epochal,” as in epoch-making: they redefine the world in which we live, introducing new uncertainties and risks, as well as new responsibilities—but for whom? World-changing technologies are inextricably political entities, affecting the distribution of power and resources within and between societies. Yet despite decades of academic and practical experience with the political dimensions of technology, contemporary societies appear inadequately prepared to cope skillfully with the new worlds that their scientists and technologists are creating. Why? What lessons can be learned from existing epochal technologies that might help societies understand, evaluate, and direct their technical potentials and trajectories into the future?

In the context of growing concern about national security threats that may emerge from germline genetic engineering, Greene will consider the cultivation of a “culture of responsibility” in synthetic biology labs. Polleri will examine a set of public controversies surrounding the role of nuclear power and the threat of radioactive contamination in post-Fukushima Japan. Garvey will map out the risk landscape surrounding AI systems and discuss strategic approaches to coping with uncertainty and disagreement in protecting against catastrophic technological risk.
About the Speakers:
Colin Garvey is a Postdoctoral Fellow at CISAC and the Stanford Institute for Human-Centered Artificial Intelligence. He studies, among other topics, the history and political economy of artificial intelligence (AI), with a comparative focus on Japan. He is currently a PhD candidate and Humanities, Arts, and Social Sciences Fellow in the Science and Technology Studies Department at Rensselaer Polytechnic Institute (RPI). His dissertation, “Averting AI Catastrophe, Together: On the Democratic Governance of Epochal Technologies,” challenges utopian/dystopian thinking about AI by arguing that more democratic governance of the technology is necessary not only to avert catastrophe but also to steer AI R&D more safely, fairly, and wisely. He won Best Early Career Paper at the 2017 meeting of the Society for the History of Technology for “Broken Promises & Empty Threats: The Evolution of AI in America, 1956–1996.” His research article on the history and political economy of Japanese AI, “An Alternative to Neoliberal Modernity: The ‘Threat’ of the Japanese Fifth Generation Computer Systems Project,” will be published in a forthcoming special issue of Pacific Historical Review. His work has been supported by the National Science Foundation (NSF). In addition to an MS in STS from RPI, Colin double-majored in Japanese and Media Studies at Vassar College. Before starting graduate school, Colin spent several years teaching in Japan, where he became a Zen Buddhist monk. Colin is fluent in Japanese and freelances as a translator of Japanese books and scientific articles.
Daniel Greene is a Postdoctoral Fellow at CISAC, where he works with Dr. Megan Palmer on strategies for risk governance in biotechnology. He uses computational social science methods to identify factors that influence the decisions of biology labs to engage in potentially risky research. Daniel completed a PhD at the Stanford University Graduate School of Education, where he worked with Prof. Carol Dweck to develop and test social-psychological interventions to improve student motivation at scale. His dissertation identified and influenced novel psychological constructs for motivating unemployed and underemployed adults to pursue job-skill training. Outside of academia, Daniel worked for five years as a data scientist and product developer at the Project for Education Research That Scales, a nonprofit that develops resources and infrastructure for disseminating best practices from education research. He also holds a BA in Cognitive Science (Honors) from Rutgers University. Daniel's work has been supported by the Open Philanthropy Project, the Carnegie Foundation for the Advancement of Teaching, the Gates Foundation, the Stanford Digital Learning Forum, and an Amir Lopatin Fellowship.
Dr. Maxime Polleri is a MacArthur Nuclear Security Postdoctoral Fellow at the Center for International Security and Cooperation. As an anthropologist of science and technology, his work examines the governance of risk in the aftermath of technological disasters involving environmental contamination. His current research focuses on Japanese public and state responses to the release of radioactive contamination after the 2011 Fukushima nuclear disaster. He has published articles and op-eds in Social Studies of Science, American Ethnologist, Anthropology Today, Anthropology Now, Medical Anthropology Quarterly, Second Spear, Somatosphere, Bulletin of the Atomic Scientists, and The Diplomat.