
Request for Proposals: Multi-Agent Security

We are seeking proposals that enable safe and cooperative interactions between humans and AIs, focusing on collusion prevention, group coordination, and pro-social AI agents.

Focus Areas

Specific work we are interested in funding:

  • AI for preventing collusion and manipulation
  • Pareto-preferred coordination agents
  • AI-enhanced group coordination

Multi-agent security

As advanced AI systems proliferate, they will increasingly operate in environments populated by other AIs and by humans—with complex, often conflicting goals. This multi-agent context raises profound safety challenges and new opportunities for cooperation. Interactions between multiple agents—whether human, artificial, or hybrid—can produce emergent behaviors that are hard to predict, control, or align.

Without careful design, such systems could foster deception, collusion, power concentration, or societal fragmentation. Yet if properly guided, multi-agent systems could instead enable an explosion in technologies for mutually-beneficial cooperation, benefiting individual one-to-one interactions as well as society-wide collaboration on collective action problems. Building safe multi-agent systems is not just about designing good individual agents. It is about shaping the ecosystem of interactions, incentives, and norms that will govern how AIs and humans co-evolve together.

What we want to fund

We seek proposals that explore how to upgrade today’s cooperation infrastructure of norms, laws, rights, and institutions to ensure that humans and AI systems can interact safely and beneficially in multi-agent settings. We are particularly interested in prototypes that address new game-theoretic dynamics and principal-agent problems that will arise in interactions between AI agents and humans, mitigate risks of collusion and deception, and enhance mechanisms for trustworthy communication, negotiation, and coordination. We aim to de-risk the deployment of cooperative AI and create pathways where AI agents strengthen rather than undermine human-aligned cooperation.

Early demonstrations—such as AI systems that assist humans in negotiation, or agents that autonomously identify and enforce mutually beneficial deals or detect and punish deception—could lay the foundation for a future where interactions among AIs and humans are predictably safe, transparent, and welfare-improving. We also welcome projects that lift collective intelligence at the group level, using AI to augment the processes through which groups form shared preferences, resolve conflicts, and coordinate action.

Specific work we are interested in

1. AI for preventing collusion and manipulation

Scalable solutions that demonstrably prevent collusion, manipulation, or exploitation in agent-mediated agreements.

2. Pareto-preferred coordination agents

Autonomous agents that can identify, negotiate, and enforce mutually beneficial arrangements between humans and other AI systems.

3. AI-enhanced group coordination

AI systems that enhance collective intelligence and enable more effective group coordination around shared preferences.

If you have a proposal which falls within multi-agent security, but does not align with the specific work we have outlined here, you are still welcome to apply. However, please note that such proposals are held to a significantly higher bar. We do not accept proposals that fall outside this area.

Our priorities

Proposals should clearly demonstrate how the work will enhance safety in multi-agent AI environments, with particular attention to preventing harmful emergent dynamics when multiple AI systems interact with each other and humans. We prioritize projects that:

Previously funded work

Examples of past projects we have funded:

How to apply?

Complete the application form linked at the top of this page. Applications are accepted year-round and reviewed quarterly. Submission deadlines are:

Proposals are first reviewed in-house for fit and quality. Strong submissions are sent to technical advisors for further evaluation. If your proposal advances, we may follow up with written questions or a short call. We aim to make decisions within eight weeks of each deadline.

Who can apply?

We accept applications from individuals, teams, and organizations. Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to justify why they need grant funding.

Funding and terms

For full eligibility criteria, financial details, and documentation requirements, see our Grant Guidelines and Conditions →

Further questions or feedback?

Please contact us at [email protected]

Grantees

Chandler Smith

Cooperative AI Foundation

David Bloomin

MettaAi

Jonas Emanuel Müller

Convergence Analysis

Keenan Pepper

Salesforce

Kola Ayonrinde

MATS

Nora Ammann

PIBBSS

Joel Pyykkö

Independent

Roland Pihlakas

Simplify (Macrotec LLC)

Toby D. Pilditch

University of Oxford

Leonardo Christov-Moore

Institute for Advanced Consciousness Studies

