To help AI development benefit humanity, Foresight Institute has held various workshops over the past years, such as AGI & Great Powers and AGI: Toward Cooperation, as well as two technical meetings in 2022 and 2023 focused on Cryptography, Security, and AI. Recently, we also launched a Grants Program that funds work on AI security risks, cryptography tools for safe AI, and beneficial multipolar AI scenarios.
While these and other efforts have highlighted a diversity of promising projects to support, the intersection of AI with cryptography and security remains nascent. This is why our workshop invited leading researchers, entrepreneurs, and funders to collaborate across fields and explore tools and architectures that help humans and AIs cooperate.
The intersection of cryptography, security, and AI is still nascent but could be of fundamental importance for beneficial futures. Some opportunities for progress were highlighted in our 2022 workshop; others are entirely new.
There are many opportunities for progress on beneficial futures at the intersection of cryptography, security, and AI that may not be immediately obvious from within each field. This two-day event invites top researchers, builders, and funders in computing, cryptography, cryptocommerce, security, and AI to explore undervalued areas for progress. Themes are loosely based on technologies highlighted in Gaming the Future.
A living book and book club about technologies of intelligent voluntary cooperation.
A group of researchers, engineers, and entrepreneurs in computer science, ML, cryptocommerce, and related fields who leverage those technologies to improve cooperation among humans and, ultimately, Artificial Intelligences. Keynotes roughly follow an unpublished book draft that proposes Intelligent Voluntary Cooperation as a path for different intelligences to peacefully pursue a diversity of goals while reducing potential conflicts. This report gives an overview of our 2021 recorded seminars, including a favorite slide, and a link to the full written summary and recording for those who wish to learn more.
A book co-edited by Allison Duettmann, Foresight Institute, and Roman Yampolskiy, University of Louisville, based on the Big Data and Cognitive Computing Special Issue.
Contributions include:
Future-Ready Strategic Oversight of Multiple Artificial Superintelligence-Enabled Adaptive Learning Systems
Safe Artificial General Intelligence via Distributed Ledger Technology
A Holistic Framework for Forecasting Transformative AI
Peacekeeping Conditions for an Artificial Intelligence Society
AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk
Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence
Global Solutions vs. Local Solutions for the AI Safety Problem
Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach
Towards AI Welfare Science and Policies
The Supermoral Singularity—AI as a Fountain of Values
Congratulations to Anthony Aguirre, Future of Life Institute, for winning the Foresight Edition of the Incentive Prize on Incentives!
The Prize is part of the Grand Challenge on Inclusive Stakeholding, an initiative by the Yun Family Foundation to nurture innovations that promote a better future for all through inclusive stakeholding, vested interdependent interests, and goal congruence.
The Challenge invites social innovations from any discipline, including economics, politics, arts, technology, and sciences (including the social sciences).
Please find out more about his excellent proposal below, which was selected from among hundreds of qualifying submissions.
This report summarizes the main findings of the 2019 AGI Strategy Meeting on “Toward Cooperation: Framing & Solving Adversarial AGI Topics,” held in San Francisco on June 20, 2019. The 2017 meeting in this series focused on drafting policy scenarios for different AI time frames, and the 2018 meeting focused on increasing coordination among AGI-relevant actors, especially the US and China. The 2019 meeting expanded on this topic by mapping concrete strategies toward cooperation, both by reframing adversarial coordination topics in cooperative terms and by sketching concrete positive solutions to coordination issues. The meeting gathered representatives of major AI and AI safety organizations, policy strategists, and other relevant actors, with the goal of fostering cooperation among global AGI-relevant actors. A group of participants presented their recent efforts toward a more cooperative AI landscape, followed by discussion in small groups. While discussions followed the Chatham House Rule, a high-level summary of the sessions is available in the report. We welcome questions, comments, and feedback.
The 2018 AGI strategy meeting was based on the observation that making progress on AI safety requires making progress in several sub-domains, including ethics, technical alignment, cybersecurity, and coordination (Duettmann, 2018). Ethics, technical alignment, and cybersecurity all contain a number of very hard problems. Given that solving those problems takes time, ensuring coordination that allows actors to cooperate on them, while avoiding race dynamics that lead to corner-cutting on safety, is a primary concern on the path to AI safety. Coordination is itself a very hard problem, but progress on coordination would also benefit ethics, technical alignment, and cybersecurity. Since coordination for AGI safety can involve, at least partly, existing entities and actors, and since there are already theoretical literature and historical precedents of other coordination problems from which we can learn, coordination is a goal that we can effectively work toward today. While the meeting’s focus was on AGI coordination, most other anthropogenic risks, e.g., those arising from potential biotechnology weapons, require coordination as well. This report highlights potential avenues for progress.
The 2017 meeting on AGI: Timeframes & Policy was motivated by the observation that some researchers’ timelines for AGI arrival were shortening, and by a perceived increase in the urgency of drafting potential policy responses to the related arrival scenarios. This report outlines participants’ timeline estimates for the achievement of Artificial General Intelligence and the problems associated with arriving at such estimates. Rather than investigating exact timelines in more detail, it is more instructive to consider different high-risk scenarios caused by Artificial Intelligence. The main part of the report focuses on three high-risk scenarios: (1) cybersecurity, (2) near-term AI concerns, and (3) cooperation leading up to Artificial General Intelligence. While some immediate recommendations for further investigation of potential policy responses were made, the meeting’s main intention was not to reach consensus on specific topics but to open up much-needed dialogue and avenues for cooperation on topics of high importance for policy considerations pertaining to Artificial Intelligence.
Foresight Institute presented its research on the reduction of catastrophic and existential risks at the First Colloquium on Catastrophic and Existential Risk, held by the B. John Garrick Institute for the Risk Sciences at the UCLA Luskin Conference Center, March 27-29, 2017. The paper, entitled Cyber, Nano, and AGI Risks: Decentralized Approaches to Reducing Risks, authored by Christine Peterson, Mark S. Miller, and Allison Duettmann, all of Foresight Institute, is published in the Conference Proceedings and at Google AI Research.