Policy Research and Advocacy

Our public policy efforts focus on maximizing the benefits and minimizing the risks of technologies of fundamental importance for the human future. We host meetings on crucial technology policy topics, commission studies, speak to diverse audiences, testify before government committees, and brief the press.

We invite individuals and organizations to participate in our policy activities. We encourage your suggestions for policy study topics and your critiques of our positions on the issues. We are particularly interested in cooperating with other organizations on policy studies of how nanotechnologies, AI, and cybertechnologies will affect the public. Individuals are invited to join as Foresight members, and corporations can participate through conference sponsorship or by underwriting policy studies of mutual interest.

Not all of our efforts are public, but a selection of those that are can be found below.

Flourishing Futures from COVID-19

Our Flourishing Futures from COVID-19 report summarizes ten weeks of daily sense-making on positive paths out of COVID-19, spanning more than 70 salons and more than 90 speakers.

Recommendations are grouped into twelve sections: health; investment & philanthropy; default institutions; governance architectures; coordination technologies; civil responsibility; sense-making systems; global resilience; planetary ecosystem; diverse worlds; culture & arts; and flourishing.

View Report

Artificial Superintelligence | Coordination & Strategy

A book co-edited by Allison Duettmann (Foresight Institute) and Roman Yampolskiy (University of Louisville), based on the Big Data and Cognitive Computing special issue.

Contributions include:

Future-Ready Strategic Oversight of Multiple Artificial Superintelligence-Enabled Adaptive Learning Systems
Safe Artificial General Intelligence via Distributed Ledger Technology
A Holistic Framework for Forecasting Transformative AI
Peacekeeping Conditions for an Artificial Intelligence Society
AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk
Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence
Global Solutions vs. Local Solutions for the AI Safety Problem
Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach
Towards AI Welfare Science and Policies
The Supermoral Singularity—AI as a Fountain of Values

BOOK

2019 Incentive Prize on Incentives: Foresight Edition 

Congratulations to Anthony Aguirre (Future of Life Institute) for winning the Foresight Edition of the Incentive Prize on Incentives!

The Prize is part of the Grand Challenge on Inclusive Stakeholding, an initiative by the Yun Family Foundation to nurture innovations that promote a better future for all through inclusive stakeholding, vested interdependent interests, and goal congruence.

The Challenge invites social innovations from any discipline, including economics, politics, arts, technology, and sciences (including the social sciences).

His excellent proposal, chosen from among hundreds of qualifying submissions, is linked below.

Proposal

Artificial General Intelligence: Toward Cooperation

This report summarizes the main findings of the 2019 AGI Strategy Meeting, “Toward Cooperation: Framing & Solving Adversarial AGI Topics,” held in San Francisco on June 20, 2019. The 2017 meeting in this series focused on drafting policy scenarios for different AI time frames; the 2018 meeting focused on increasing coordination among AGI-relevant actors, especially the US and China. The 2019 meeting expanded on this topic by mapping concrete strategies toward cooperation, both by reframing adversarial coordination topics in cooperative terms and by sketching concrete positive solutions to coordination issues. The meeting gathered representatives of major AI and AI safety organizations, policy strategists, and other relevant stakeholders, with the goal of fostering cooperation among global AGI-relevant actors. A group of participants presented their recent efforts toward a more cooperative AI landscape, followed by discussion in small groups. While discussions followed the Chatham House Rule, a high-level summary of the sessions is available in the report. We welcome questions, comments, and feedback.

REPORT

Artificial General Intelligence: Coordination & Great Powers

The 2018 AGI Strategy Meeting was based on the observation that progress on AI safety requires progress in several sub-domains, including ethics, technical alignment, cybersecurity, and coordination (Duettmann, 2018). Ethics, technical alignment, and cybersecurity all contain a number of very hard problems. Given that solving those problems takes time, a primary concern on the path to AI safety is ensuring coordination among actors that allows for cooperation on those problems while avoiding race dynamics that lead to corner-cutting on safety. Coordination is itself a very hard problem, but progress on coordination would also benefit ethics, technical alignment, and cybersecurity. Since coordination for AGI safety can involve, at least in part, existing entities and actors, and since there is already theoretical literature and historical precedent from other coordination problems to learn from, coordination is a goal that we can effectively work toward today. While the meeting’s focus was on AGI coordination, most other anthropogenic risks, e.g., those arising from potential biotechnology weapons, require coordination as well. This report highlights potential avenues for progress.

REPORT

Artificial General Intelligence: Timeframes & Policy 

The 2017 meeting on AGI: Timeframes & Policy was prompted by the observation that some researchers’ timelines for AGI arrival were shortening, and by the perceived increased urgency of drafting potential policy responses to the related arrival scenarios. This report outlines participants’ timeline estimates for the achievement of Artificial General Intelligence and the problems associated with arriving at such estimates. Rather than investigating exact timelines in more detail, the meeting found it more instructive to consider different high-risk scenarios caused by Artificial Intelligence. The main part of the report focuses on three high-risk scenarios: (1) cybersecurity, (2) near-term AI concerns, and (3) cooperation leading up to Artificial General Intelligence. While some immediate recommendations for further investigation of potential policy responses were made, the meeting’s main intention was not to reach consensus on specific topics but to open up much-needed dialogue and avenues for cooperation on topics of high importance for policy considerations pertaining to Artificial Intelligence.

REPORT

Strategies for the Reduction of Catastrophic and Existential Risks

Foresight Institute presented its research on the reduction of catastrophic and existential risks at the First Colloquium on Catastrophic and Existential Risk, held by the B. John Garrick Institute for the Risk Sciences at the UCLA Luskin Conference Center, March 27-29, 2017. The paper, entitled Cyber, Nano, and AGI Risks: Decentralized Approaches to Reducing Risks, authored by Christine Peterson, Mark S. Miller, and Allison Duettmann, all of Foresight Institute, is published in the Conference Proceedings and at Google AI Research.

REPORT