
Request for Proposals: Security Technologies for AI-Relevant Systems

We are seeking proposals for infrastructure that secures frontier AI systems and digital environments, including automated red-teaming, formal verification, and decentralized cryptographic tools.

Focus Areas

Specific work we are interested in funding:


  • AI-augmented vulnerability discovery and red-teaming
  • Provably secure architectures and privacy-enhancing cryptography
  • Decentralized AI and compute infrastructure

Security technologies for AI-relevant systems

As AI systems grow more capable, they also become more critical—and more vulnerable—components of an already insecure modern computing infrastructure. Whether embedded in decision-making, communications, or control systems, advanced AI introduces new attack surfaces and amplifies the consequences of security failures.

Traditional security paradigms, often reactive, piecemeal, and human-driven, cannot match the speed, scale, and complexity of AI-supported attacks. As AI systems become more autonomous and embedded, their attack surfaces and criticality will only increase. We urgently need next-generation security technologies, built from the ground up, to protect frontier AI systems and our nuclear, energy, and other security-critical infrastructure. Investing early in scalable, verifiable security technologies is essential to the safe deployment of future AI.

What we want to fund

We seek proposals that use AI and related tools to dramatically improve our ability to secure digital infrastructure, with a focus on approaches that are high-assurance, privacy-preserving, and resilient to a rapidly evolving threat landscape. We aim to fund work that enables rigorous defense, including AI-automated red-teaming and vulnerability discovery, formal verification of critical infrastructure, and scalable cryptographic technologies that distribute trust, decentralize control, and ensure accountability.

We are especially interested in pragmatic security approaches that work for today’s frontier models and hardware while also laying the groundwork for secure architectures in a world where powerful AI is widely deployed. We welcome ambitious proposals that push the boundaries of formal methods, secure computation, and privacy-preserving coordination, as well as foundational work in areas like theorem proving and backdoor detection.

Specific work we are interested in

1. AI-augmented vulnerability discovery and red-teaming

Tools that use AI to automate red-teaming and scalable vulnerability scanning, especially for software and models deployed in critical infrastructure.

2. Provably secure architectures and privacy-enhancing cryptography

Provable guarantees for system behavior, together with scalable cryptographic infrastructure to support trustworthy AI deployment.

3. Decentralized AI and compute infrastructure

Infrastructure that distributes trust, increases transparency, and enables secure AI operation in adversarial environments.

If you have a proposal that falls within “security technologies for AI-relevant systems” but does not align with the specific work outlined here, you are still welcome to apply. However, please note that such proposals are held to a significantly higher bar. We do not accept proposals that fall outside this area.

Our priorities

We prioritize work that strengthens the foundations of AI security through rigorous verification, privacy preservation, and resilient architecture. Projects should demonstrate a clear path toward security paradigms that can scale with increasingly powerful and autonomous AI systems.

Previously funded work

Examples of past projects we have funded are listed under Grantees below.

How to apply?

Complete the application form linked at the top of this page. Applications are accepted year-round and reviewed quarterly. Submission deadlines are:

Proposals are first reviewed in-house for fit and quality. Strong submissions are sent to technical advisors for further evaluation. If your proposal advances, we may follow up with written questions or a short call. We aim to make decisions within eight weeks of each deadline.

Who can apply?

We accept applications from individuals, teams, and organizations. Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to explain why they need grant funding.

Funding and terms

For full eligibility criteria, financial details, and documentation requirements, see our Grant Guidelines and Conditions →

Further questions or feedback?

Please contact us at [email protected]

Grantees

Adam Gleave

FAR AI

Chris Lakin

Independent

Florian Tramer

ETH Zurich

Harriet Farlow

Mileva Security Labs

Abhinav Singh

University of Oxford

Apart Research

OpenMined
