• About Us
    • About Us
    • Team
    • Directors & Advisors
    • Open Positions
    • Contact
  • Seminars
    • Nanotech: Molecular Machines
    • Biotech: Health Extension
    • Computation: Intelligent Cooperation
    • Neurotech: Improving Cognition
    • Space: Expanding Outward
    • Foresight: Existential Hope
      • Existential Hope Page
  • Prizes & Fellowships
    • Fellowship
    • The Longevity Prize
    • Foresight Feynman Prizes
      • Press Release for Winners 2022
    • Foresight Accelerators
  • Events
    • Global Meetups
    • Past Member Gatherings
    • 2023 Longevity Frontiers Workshop
    • 2023 Whole Brain Emulation Workshop
    • 2023 Foresight Space Workshop
    • 2023 Intelligent Cooperation: Cryptography, Security, AI
    • 2023 Foresight Molecular Systems Design Workshop
    • 2023 Vision Weekends
  • Publications
    • Nanotech: Molecular Machines
    • Biotech: Health Extension
    • Computation: Intelligent Cooperation
    • YouTube Channel
    • Podcast
    • Newsletters
  • Foresight X
    • Existential Hope
    • Gaming the Future
    • Tech Trees
  • Donate & Join
    • Personal Longevity Group

Crypto, Security, AI Workshop

There are many opportunities for progress on beneficial futures at the intersection of cryptography, security, and AI that may not be immediately obvious from within each field. This two-day event invites top researchers, builders, and funders in computing, cryptography, cryptocommerce, security, and AI to explore undervalued areas for progress. Themes are loosely based on technologies highlighted in Gaming the Future.

WORKSHOP PAGE
REPORT
Workshop Recordings

Gaming the Future: Technologies for Intelligent Voluntary Cooperation

A living book and book club about technologies of intelligent voluntary cooperation.

REPORT
VISIT BOOK PAGE

2021 Foresight Intelligent Cooperation Seminars

A group of researchers, engineers, and entrepreneurs in computer science, ML, cryptocommerce, and related fields who leverage these technologies to improve cooperation among humans and, ultimately, artificial intelligences. Keynotes roughly follow an unpublished book draft that proposes Intelligent Voluntary Cooperation as a path for different intelligences to peacefully pursue a diversity of goals while reducing potential conflicts. This report gives an overview of our 2021 recorded seminars, including a favorite slide and a link to the full written summary and recording for those who wish to learn more.

REPORT

Artificial Superintelligence | Coordination & Strategy

A book co-edited by Allison Duettmann, Foresight Institute, and Roman Yampolskiy, University of Louisville, based on the Special Issue of the journal Big Data and Cognitive Computing.

Contributions include:

  • Future-Ready Strategic Oversight of Multiple Artificial Superintelligence-Enabled Adaptive Learning Systems
  • Safe Artificial General Intelligence via Distributed Ledger Technology
  • A Holistic Framework for Forecasting Transformative AI
  • Peacekeeping Conditions for an Artificial Intelligence Society
  • AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk
  • Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence
  • Global Solutions vs. Local Solutions for the AI Safety Problem
  • Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach
  • Towards AI Welfare Science and Policies
  • The Supermoral Singularity—AI as a Fountain of Values

REPORT

2019 Incentive Prize on Incentives: Foresight Edition

Congratulations to Anthony Aguirre, Future of Life Institute, for winning the Foresight Edition of the Incentive Prize on Incentives!  

The Prize is part of the Grand Challenge on Inclusive Stakeholding, an initiative by the Yun Family Foundation to nurture innovations that promote a better future for all through inclusive stakeholding, vested interdependent interests, and goal congruence.

The Challenge invites social innovations from any discipline, including economics, politics, arts, technology, and sciences (including the social sciences).

Please find out more about his excellent proposal below, which was chosen from among hundreds of qualifying submissions.

PROPOSAL

Artificial General Intelligence: Toward Cooperation

This report summarizes the main findings of the 2019 AGI Strategy Meeting on “Toward Cooperation: Framing & Solving Adversarial AGI Topics,” held in San Francisco on June 20, 2019. The 2017 meeting in this series focused on drafting policy scenarios for different AI time frames; the 2018 meeting focused on increasing coordination among AGI-relevant actors, especially the US and China. The 2019 meeting expanded on this topic by mapping concrete strategies toward cooperation, both by reframing adversarial coordination topics in cooperative terms and by sketching concrete positive solutions to coordination issues. The meeting gathered representatives of major AI and AI safety organizations, policy strategists, and other relevant actors, with the goal of fostering cooperation among global AGI-relevant actors. A group of participants presented their recent efforts toward a more cooperative AI landscape, followed by discussion in small groups. While the discussions followed the Chatham House Rule, a high-level summary of the sessions is available in the report. We welcome questions, comments, and feedback.

REPORT

Artificial General Intelligence: Coordination & Great Powers

The 2018 AGI strategy meeting was based on the observation that making progress on AI safety requires progress in several sub-domains, including ethics, technical alignment, cybersecurity, and coordination (Duettmann, 2018). Ethics, technical alignment, and cybersecurity all contain a number of very hard problems. Given that solving those problems requires time, ensuring coordination that allows actors to cooperate on them, while avoiding race dynamics that lead to corner-cutting on safety, is a primary concern on the path to AI safety. Coordination is itself a very hard problem, but progress on it would also benefit ethics, technical alignment, and cybersecurity. Since coordination for AGI safety can involve, at least partly, existing entities and actors, and since theoretical literature and historical precedents of other coordination problems already exist from which we can learn, coordination is a goal we can effectively work toward today. While the meeting’s focus was on AGI coordination, most other anthropogenic risks, e.g., those arising from potential biotechnology weapons, require coordination as well. This report highlights potential avenues for progress.

REPORT

Artificial General Intelligence: Timeframes & Policy

The 2017 meeting on AGI: Timeframes & Policy was prompted by the observation that some researchers’ timelines for AGI arrival were shortening, and by the perceived urgency of drafting potential policy responses to the associated arrival scenarios. This report outlines participants’ timeline estimates for the achievement of Artificial General Intelligence and the problems associated with arriving at such estimates. Rather than investigating exact timelines in more detail, participants found it more instructive to consider different high-risk scenarios involving Artificial Intelligence. The main part of the report focuses on three high-risk scenarios: (1) cybersecurity, (2) near-term AI concerns, and (3) cooperation leading up to Artificial General Intelligence. While some immediate recommendations for further investigation of potential policy responses were made, the meeting’s main intention was not to reach consensus on specific topics but to open up much-needed dialogue and avenues for cooperation on topics of high importance for policy considerations pertaining to Artificial Intelligence.

REPORT

Strategies for the Reduction of Catastrophic and Existential Risks

Foresight Institute presented its research on the reduction of catastrophic and existential risks at the First Colloquium on Catastrophic and Existential Risk, held by the B. John Garrick Institute for the Risk Sciences at the UCLA Luskin Conference Center, March 27-29, 2017. The paper, entitled Cyber, Nano, and AGI Risks: Decentralized Approaches to Reducing Risks, authored by Christine Peterson, Mark S. Miller, and Allison Duettmann, all of Foresight Institute, is published in the Conference Proceedings and at Google AI Research.

REPORT
  • Mission
  • Join
  • Contact
  • About Nanotechnology
    • Foresight Nanotechnology Roadmap
  • Do Not Sell My Information