To help AI development benefit humanity, Foresight Institute has held various workshops in recent years, such as "AGI & Great Powers" and "AGI: Toward Cooperation," in addition to two technical meetings in 2022 and 2023 focused on Cryptography, Security, and AI. Recently, we also launched a Grants Program that funds work on AI security risks, cryptography tools for safe AI, and beneficial multipolar AI scenarios.
While these and other efforts have surfaced a diversity of promising projects to support, the intersection of AI with cryptography and security remains a nascent but promising field. This is why our upcoming workshop "AGI: Cryptography, Security & Multipolar Scenarios" invites leading researchers, entrepreneurs, and funders to collaborate across fields and explore tools and architectures that help humans and AIs cooperate.
Participants are invited to explore new opportunities, form lasting collaborations, and further cooperation toward shared long-term goals. In addition to short presentations, working groups, and project development, we offer mentorship hours, open breakouts, and speaker and sponsor gatherings.
Hardware Governance
Collective Intelligence / Artificial Intelligence
How do models learn when there are privacy constraints?
Challenges and Solutions for AI Security in the Age of Multi-Polar AGI
Securing human review of AI with AI
How to prevent LLMs from relearning undesired concepts
Securing model weights at the frontier
AI threat models, hacking, deception, and manipulation
Autonomy is all we need: A bottom-up approach to AGI alignment for a massively multi-polar future
AI as public infrastructure
Harnessing the Heft: Securing LLM Weights
SecureDNA as a model for safe AI capability evals
Multipolar Concerns for Technical AI Governance
AI, Decentralization, and Regulating Emerging Technologies
The Foresight Intelligent Cooperation Tech Tree
Neartermist safety: incentive-compatible directions for large model oversight
How do multipolar scenarios get exacerbated by AI?
Wargaming for Possible TAI Futures
What should multi-agent alignment aim to achieve?
Adversarial Scalable Oversight for Truthfulness
AI: Will it help solve our data mayhem problem or make it worse?
Agency Enhancement as a Beneficial Target
The Institute, top floor of the Salesforce Tower, San Francisco.
The Institute is situated in the Salesforce Tower in San Francisco, home to many of the world's great innovators, including incredible talent in the fields of art, science, medicine, and technology.