Foresight Institute Public Policy
Foresight Institute focuses its public policy research on maximizing the benefits and minimizing the downsides of technologies of fundamental importance for the human future.
Individuals and organizations are invited to participate in Foresight policy activities. We encourage your suggestions for policy study topics and critiques of our positions on the issues. We are particularly interested in cooperating with other organizations on policy studies of how nanotechnologies, AI, and cybertechnologies will affect the public. Individuals are invited to join as Foresight members, and corporations can participate through corporate membership, conference sponsorship, or underwriting policy studies of mutual interest. Foundations and other organizations with an interest in our work should speak with Christine Peterson, Foresight’s Co-Founder.
Foresight Institute uses a variety of processes to develop and deliver policy education and recommendations. These include hosting policy workshops (see workshop reports on AI strategy below), commissioning policy studies, speaking on policy topics for diverse audiences, testifying before government committees, and briefing the press on policy matters. Please see below for a recent briefing, The AI Safety Executive Briefing, given by Allison Duettmann, Foresight Institute, at the 2018 O’Reilly AI Conference.
Artificial General Intelligence: Coordination & Great Powers
The 2018 AGI strategy meeting was based on the observation that making progress on AI safety requires progress in several sub-domains, including ethics, technical alignment, cybersecurity, and coordination (Duettmann, 2018). Ethics, technical alignment, and cybersecurity each contain a number of very hard problems. Because solving those problems takes time, a primary concern on the path to AI safety is ensuring coordination among actors that allows cooperation on those problems while avoiding race dynamics that lead to corner-cutting on safety. Coordination is itself a very hard problem, but progress on coordination would also benefit work on ethics, technical alignment, and cybersecurity. Since coordination for AGI safety can involve, at least in part, existing entities and actors, and since theoretical literature and historical precedents from other coordination problems already exist to learn from, coordination is a goal we can effectively work toward today. Given current geopolitical developments, including but not limited to dueling tariff plans between China and the US, signs of potential resurgent nuclear proliferation, and AI military arms-race dynamics, making strides toward AGI coordination becomes ever more urgent. Finally, identifying potential avenues for AGI coordination among important global actors can have collateral advantages for coordination on other risks as well. While the meeting’s focus was on AGI coordination, most other anthropogenic risks, e.g., those arising from potential biotechnology weapons, require coordination too. Thus, while some findings of this meeting are AGI-specific, other pointers on coordination may provide a useful starting point for an overall policy framework that promotes robustness, resiliency, or even antifragility.
Artificial General Intelligence: Timeframes & Policy
The 2017 meeting on AGI: Timeframes & Policy was prompted by the observation that some researchers’ timelines for AGI arrival were shortening, and by the perceived increased urgency of drafting potential policy responses to the related arrival scenarios. The report outlines participants’ timeline estimates for the achievement of Artificial General Intelligence and the problems associated with arriving at such estimates. Rather than investigating exact timelines in more detail, the report finds it more instructive to consider different high-risk scenarios involving Artificial Intelligence. Its main part focuses on three high-risk scenarios: (1) cybersecurity, (2) near-term AI concerns, and (3) cooperation leading up to Artificial General Intelligence. While some immediate recommendations for further investigation of potential policy responses were made, the meeting’s main intention was not to reach consensus on specific topics but to open up much-needed dialogue and avenues for cooperation on topics of high importance for policy considerations pertaining to Artificial Intelligence.
Strategies for the Reduction of Catastrophic and Existential Risks
Foresight Institute presented its research on the reduction of catastrophic and existential risks at the First Colloquium on Catastrophic and Existential Risk, held by the B. John Garrick Institute for the Risk Sciences at the UCLA Luskin Conference Center, March 27-29, 2017. The paper, entitled Cyber, Nano, and AGI Risks: Decentralized Approaches to Reducing Risks, authored by Christine Peterson, Mark S. Miller, and Allison Duettmann, all of Foresight Institute, is published in the Conference Proceedings and at Google AI Research.
Policy White Papers
As a nonprofit research institute with the goal of promoting beneficial technologies, Foresight Institute has included the development of public policy recommendations in its work since the beginning. In our policy research we implement our organization’s values of openness, good science, and respect for public involvement. We attempt to take a moderate, non-partisan stance, examining both expected benefits and possible downsides to technology and suggesting ways to maximize the former and minimize the latter. Our timeframe of interest is broader than some other participants in policy debates, ranging from near term to very long term.
Policy development within Foresight begins with an informal process of consensus through discussion among the board, staff, advisors, members, and the public, both online and at our meetings. Based on established scientific fact and projected technological possibilities, we develop potential scenarios and policy options likely to affect these in positive ways. We then spread and test these ideas by interacting with the public, researchers, and policymakers via the web, lectures, testimony, journal articles, and white papers.
Previous work on public policy issues by Foresight directors, staff, and associates, including relevant papers presented at Foresight Conferences:
“Applying Nanotechnology to the Challenges of Global Poverty”
“Nanotechnology for Clean Energy and Resources”
“Nanotechnology, Resources, and Pollution Control”
“Balancing the National Nanotechnology Initiative’s R&D Portfolio”
“Testimony to U.S. Senate Committee on Commerce, Science, and Transportation’s Subcommittee on Science, Technology, and Space hearing on New Technologies for a Sustainable World”
“Testimony to the U.S. House of Representatives Committee on Science, Subcommittee on Basic Research”
“Testimony for the Committee on Science, U.S. House of Representatives”
“Testimony on Societal Implications of Nanotechnology for the U.S. House Committee on Science”
“The Future of Nanotechnology: Molecular Manufacturing”
“Nanotechnology: from Feynman to Funding”
“Open Sourcing Nanotechnology Research and Development: Issues and Opportunities”
“Safe exponential manufacturing”
“Environmental regulation of nanotechnology: Some preliminary observations”
“Nanotechnology and regulatory policy: Three futures”
“Legal problems of nanotechnology: An overview”
“Nanotechnology: from Feynman to the Grand Challenge of Molecular Manufacturing”, IEEE Technology and Society publication abstract, PDF
“Foresight Guidelines Version 6.0: Foresight Guidelines for Responsible Nanotechnology Development”
“Assessing the Potential of Molecular Nanotechnology for Space Operations”
“Molecular Nanotechnology for Space Operations”
“Strategies and Survival”
“Safety, Accidents, and Abuse”
Policy Issues Briefs
- U.S. Federal Nanotech R&D Funding, by Jacob Heller and Christine Peterson
- Human Enhancement and Nanotechnology, by Jacob Heller and Christine Peterson
- Nanoparticle Safety, by Jacob Heller and Christine Peterson
- Nanotech and IP, by Jacob Heller and Christine Peterson
- Nanotech Export Controls, by Jacob Heller and Christine Peterson
- Nanotechnology, Poverty, and Disparity, by Jacob Heller and Christine Peterson
- Nanotechnology and Surveillance, by Jacob Heller and Christine Peterson
- “Valley of Death” in Nanotechnology Investing, by Jacob Heller and Christine Peterson
Discussions of Policy Topics
- “Balancing the National Nanotechnology Initiative’s R&D Portfolio”, by Neil Jacobstein, Ralph Merkle, and Robert Freitas (PDF – 68 KB)
- Foresight Position Statement on Avoiding High-Tech Terrorism
- Nanotechnology: Six Lessons from Sept. 11
- “How good scientists reach bad conclusions” by Ralph C. Merkle
- “Environmental Regulation of Nanotechnology: Some Preliminary Observations”, by Glenn Harlan Reynolds, PDF format, 112 KB.
- “A Dialog on Dangers” by K. Eric Drexler
- “Regulating Nanotechnology Development” by David Forrest
- Arguments made by Arthur Kantrowitz about “The Weapon of Openness” are crucial to thinking about policy toward nanotechnology.
- Analysis of “Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations”, written by Robert A. Freitas Jr., takes a technical look at some classic “gray goo” scenarios and concludes that early detection is the key to an effective defense.
- The topics page for the May Senior Associates Gathering “Engines of Creation 2000: Confronting Singularity” provides a primer on issues to be faced with the advent of molecular nanotechnology.
- “Nanotechnology and Global Security”, a talk presented at the Fourth Foresight Conference on Molecular Nanotechnology by Admiral David E. Jeremiah, United States Navy (Retired), former Vice Chairman of the Joint Chiefs of Staff
- “Nanotechnology and International Security” was presented at the Fifth Foresight Conference on Molecular Nanotechnology by Mark A. Gubrud
- “Molecular Nanotechnology and the World System”, by Thomas McCarthy
- “Law Enforcement and Emerging Technology”, a Guest Viewpoint presented in Update 49 by Captain Thomas J. Cowper, New York State Police.
- Essays exploring basic aspects of human nature and political economy that will shape how emerging technologies affect the societies into which they emerge.
As advancing technology enables more thorough and less expensive surveillance, what issues are worth exploring to ensure that these capabilities are deployed for beneficial purposes?
Nanosurveillance: Issues meriting exploration
Huge economic, environmental, health, and security benefits are expected from the coming nanotechnology-enabled Sensor Age—if these devices are accepted by the public. Is there an open-source solution to the looming conflict between those using sensors to collect data and those whose data is being collected?
Open Source Sensing Initiative—a Foresight Institute project