Secure AI Grant
This grant program seeks to fund projects that address risks from advanced AI, focusing on four underexplored approaches: security technologies, automating research and forecasting, multi-agent security, and neurotechnology.
Security Technologies
We fund infrastructure that secures frontier AI systems and digital environments, including automated red-teaming, formal verification, and decentralised cryptographic tools.
Automating Research and Forecasting
We support tools that use AI to automate scientific research workflows and generate reliable forecasts of technological change. This includes open-source assistants, continuous modelling systems, and infrastructure for collaborative forecasting.
Multi-Agent Security
We fund work that enables safe and cooperative interactions between humans and AIs—focusing on collusion prevention, group coordination, and pro-social AI agents.
Neurotechnology
We support neuroscience-inspired approaches to AI alignment and human capability enhancement, such as brain-computer interfaces, lo-fi brain emulations, and brain-aligned AI models.
A call for project proposals
As advanced AI systems become more capable and widespread, the consequences of their success – or failure – will shape the trajectory of civilisation. We believe this future can go exceptionally well, but it is not guaranteed.
To reduce the risk of catastrophic outcomes and support safer trajectories, we are seeking project proposals within four underexplored areas:
- Security technologies for AI-relevant systems
- Automating research and forecasting
- Safe multi-agent scenarios
- Neurotechnology to integrate with, or compete against, AGI
The specific work we are interested in funding is listed on the pages linked above. If your proposal falls within one of our four focus areas, but does not align with the specific types of work we’ve outlined, you are still welcome to apply. However, please note that such proposals are held to a significantly higher bar. We do not accept proposals that fall outside our four focus areas.
How to apply?
Complete the application form linked at the top of this page. Applications are accepted year-round and reviewed quarterly. Submission deadlines are:
- March 31
- June 30
- September 30
- December 31
Proposals are first reviewed in-house for fit and quality. Strong submissions are sent to technical advisors for further evaluation. If your proposal advances, we may follow up with written questions or a short call. We aim to make decisions within eight weeks of each deadline.
Who can apply?
We accept applications from individuals, teams, and organizations. Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to explain why they need grant funding.
Funding and terms
- We award $4.5–5.5M in total funding annually. Grants typically range from $10,000 to over $300,000, but we do not set fixed minimums or maximums. Applicants should request the amount they believe is appropriate, supported by a clear budget and scope.
- We fund both short-term and multi-year projects. For longer or higher-budget work, we may disburse funds in tranches, with later payments contingent on progress.
- We can fund overhead costs up to 10% of direct research costs, where these directly support the funded work.
- Grants are subject to basic reporting requirements. Grantees are expected to submit brief progress updates at regular intervals, describing use of funds and progress against agreed milestones.
- Tax obligations vary by country and organization type. Applicants are responsible for understanding and complying with any applicable tax requirements.
For full eligibility criteria, financial details, and documentation requirements, see our Grant Guidelines and Conditions →
Further questions or feedback?
Please contact us at [email protected]
Grantees
- Ed Boyden, Boyden Lab
- Uwe Kortshagen, University of Minnesota
- Konrad Kording, Kording Lab
- Chris Lakin, Independent
- Florian Tramer, ETH Zurich
- Harriet Farlow, Mileva Security Labs
- Apart Research
- OpenMined
- Benjamin Wilson, Metaculus
- Lovkush Agarwal, University of Cambridge
- Chandler Smith, Cooperative AI Foundation
- Jonas Emanuel Müller, Convergence Analysis
- Keenan Pepper, Salesforce
- Kola Ayonrinde, MATS
- Joel Pyykkö, Independent
- Roland Pihlakas, Simplify (Macrotec LLC)
- Toby D. Pilditch, University of Oxford
- Leonardo Christov-Moore, Institute for Advanced Consciousness Studies
- MATS Research
- Bradley Love, UCL
- Catalin Mitelut, NYU and University of Basel
- Marc Carauleanu, AE Studio
- Maximillian Schons, Eon Systems
- Isaak Freeman, Massachusetts Institute of Technology
- PK Douglas, University College London (Honorary)
- Tom Burns, SciAI Center – Cornell University