Request for Proposals: Security Technologies for AI-Relevant Systems
We are seeking proposals for infrastructure that secures frontier AI systems and digital environments, including automated red-teaming, formal verification, and decentralized cryptographic tools.
Focus Areas
Specific work we are interested in funding:
- AI-augmented vulnerability discovery and red-teaming
- Provably secure architectures and privacy-enhancing cryptography
- Decentralized AI and compute infrastructure
Security technologies for AI-relevant systems
As AI systems grow more capable, they also become more critical—and more vulnerable—components of an already insecure modern computing infrastructure. Whether embedded in decision-making, communications, or control systems, advanced AI introduces new attack surfaces and amplifies the consequences of security failures.
Traditional security paradigms, often reactive, piecemeal, and human-driven, cannot keep pace with the speed, scale, and complexity of AI-supported attacks. As AI systems become more autonomous and embedded, their attack surfaces and criticality will only increase. We urgently need next-generation security technologies, designed from the ground up to defend civilization, starting with frontier AI systems and our nuclear, energy, and other security-critical infrastructure. Investing early in scalable, verifiable security technologies is essential to the safe deployment of future AI.
What we want to fund
We seek proposals that use AI and related tools to dramatically improve our ability to secure our digital infrastructure, with a focus on approaches that are high-assurance, privacy-preserving, and resilient to a rapidly evolving threat landscape. We seek to fund work that enables rigorous defense—including AI-automated red-teaming and vulnerability discovery, formal verification of critical infrastructure, and scalable cryptographic technologies that distribute trust, decentralize control, and ensure accountability.
We are especially interested in pragmatic security approaches that work for today’s frontier models and hardware while also laying the groundwork for secure architectures in a world where powerful AI is widely deployed. We welcome ambitious proposals that push the boundaries of formal methods, secure computation, and privacy-preserving coordination, as well as foundational work in areas like theorem proving and backdoor detection.
Specific work we are interested in
1. AI-augmented vulnerability discovery and red-teaming
Tools that use AI to automate red-teaming and scalable vulnerability scanning, especially for software and models deployed in critical infrastructure.
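As a toy illustration of this kind of automation, the sketch below couples a mutation-based fuzzing loop with a model-backed input generator. Everything here, including the hypothetical `llm_propose_mutations` helper and the toy target, is an illustrative assumption rather than a prescribed design.

```python
# A toy sketch of an AI-augmented fuzzing loop. `llm_propose_mutations` is a
# hypothetical stand-in for a model-backed input generator; here it just
# flips random bytes so the example runs without any model dependency.
import random

def llm_propose_mutations(seed: bytes, n: int = 4) -> list[bytes]:
    """Hypothetical LLM call: suggest adversarial variants of a seed input."""
    variants = []
    for _ in range(n):
        buf = bytearray(seed)
        i = random.randrange(len(buf))
        buf[i] ^= random.randrange(1, 256)
        variants.append(bytes(buf))
    return variants

def target(data: bytes) -> None:
    """Toy system under test: 'crashes' whenever the input contains 0xDE."""
    if b"\xde" in data:
        raise RuntimeError("simulated memory-safety bug")

def fuzz(seed: bytes, rounds: int = 1000) -> list[bytes]:
    corpus, crashes = [seed], []
    for _ in range(rounds):
        for candidate in llm_propose_mutations(random.choice(corpus)):
            try:
                target(candidate)
                corpus.append(candidate)   # real fuzzers keep only coverage-increasing inputs
            except RuntimeError:
                crashes.append(candidate)  # record a potential vulnerability
    return crashes

print(f"found {len(fuzz(b'hello'))} crashing inputs")
```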
2. Provably secure architectures and privacy-enhancing cryptography
Developing provable guarantees for system behavior and scalable cryptographic infrastructure to support trustworthy AI deployment:
- Improving the efficiency of cryptographic and related technologies, such as secure multi-party computation (MPC), zero-knowledge proofs, and differential privacy, to build privacy-preserving, decentralized approaches to AI auditing, governance, and cooperation (a differential-privacy sketch follows this list).
- AI tools that make formal verification tractable for real-world systems, starting with security-critical infrastructure (see the verification sketch after this list).
- Designing or proving properties of secure system architectures.
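To make the first item concrete, here is a minimal differential-privacy sketch in Python; the auditing scenario and the `private_count` helper are illustrative assumptions, not a required approach.

```python
# Minimal sketch of the Laplace mechanism from differential privacy.
# The auditing scenario and `private_count` helper are illustrative only.
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) variate is the difference of two i.i.d. exponentials.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_count(flags: list[bool], epsilon: float) -> float:
    """Release a count under epsilon-differential privacy.
    A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
    return sum(flags) + laplace_noise(1.0 / epsilon)

# Example: privately report how many audited model outputs were flagged,
# without revealing whether any single output contributed to the count.
flags = [random.random() < 0.1 for _ in range(10_000)]
print(private_count(flags, epsilon=0.5))  # ~1000 plus noise of scale 2
```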
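And as a small, self-contained example of machine-checked verification, the following sketch uses the z3 SMT solver to prove a bit-level identity over all 2^64 input pairs, the kind of exhaustive guarantee that testing alone cannot provide.

```python
# Machine-checked proof, via the z3 SMT solver (pip install z3-solver), that
# the overflow-safe average (x & y) + ((x ^ y) >> 1) equals floor((x + y) / 2)
# for every pair of 32-bit unsigned integers.
from z3 import BitVec, LShR, UDiv, ZeroExt, prove

x, y = BitVec("x", 32), BitVec("y", 32)

# Reference semantics: widen to 33 bits so x + y cannot overflow.
ref = UDiv(ZeroExt(1, x) + ZeroExt(1, y), 2)

# Optimized version, widened for comparison; LShR is logical shift right.
opt = ZeroExt(1, (x & y) + LShR(x ^ y, 1))

prove(ref == opt)  # prints "proved"
```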
3. Decentralized AI and compute infrastructure
Designing infrastructure that distributes trust, increases transparency, and enables secure AI operation in adversarial environments:
- Isolated confidential compute environments with trusted execution, secure boot, tamper-proof logging, and hardware attestation (a tamper-evident logging sketch follows this list).
- Support for community- or individually-owned decentralized compute hubs for running AI models outside centralized control.
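To illustrate the logging ingredient above, here is a minimal sketch of a tamper-evident, hash-chained log; the TEE signing step that would anchor it in practice is deliberately omitted.

```python
# Minimal sketch of a tamper-evident, hash-chained log. In a real deployment
# the chain head would be signed inside a TEE or by attested hardware; this
# toy only shows why after-the-fact edits are detectable.
import hashlib
import json

GENESIS = "0" * 64

def digest(event: str, prev: str) -> str:
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list[dict], event: str) -> None:
    prev = log[-1]["digest"] if log else GENESIS
    log.append({"event": event, "prev": prev, "digest": digest(event, prev)})

def verify(log: list[dict]) -> bool:
    prev = GENESIS
    for entry in log:
        if entry["prev"] != prev or entry["digest"] != digest(entry["event"], prev):
            return False
        prev = entry["digest"]
    return True

log: list[dict] = []
append(log, "model weights loaded")
append(log, "inference request served")
print(verify(log))           # True
log[0]["event"] = "tampered"
print(verify(log))           # False: any edit breaks the chain
```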
If you have a proposal that falls within “security technologies for AI-relevant systems” but does not align with the specific work outlined here, you are still welcome to apply. However, please note that such proposals are held to a significantly higher bar. We do not accept proposals that fall outside this area.
Our priorities
We prioritize work that strengthens the foundations of AI security through rigorous verification, privacy preservation, and resilient architecture. Projects should demonstrate a clear path toward security paradigms that can scale with increasingly powerful and autonomous AI systems. We especially welcome proposals that:
- Focus on practical implementation and the efficiency gains needed to reach performance at AI scale.
- Create decentralized infrastructure that distributes trust and prevents unhealthy concentration of control.
- Leverage AI to automate the defense pipeline and keep pace with increasingly AI-supported offense.
- Take inspiration from disciplines across security engineering and cryptography to develop innovative AI governance and cooperation solutions.
Previously funded work
Examples of past projects we have funded include:
- Formalizing Threat Models for Cryptographic Security of AI – Development of formal threat models for using cryptography to secure AI systems against adversarial examples and attacks on model watermarking, identifying overlooked vulnerabilities through rigorous modeling.
- Offensive Cyber Capability Evaluation for LLMs – Creation of realistic, stateful capture-the-flag scenarios to assess the offensive capabilities of large language models, helping quantify risks from misaligned or criminal misuse of advanced AI systems.
- Professional GenAI Security Training and CTF Exercises – Delivery of adversary simulation-based Capture the Flag (CTF) training programs tailored for securing enterprise LLM services, including hands-on labs and security agent development playgrounds.
- AI Security Incident Database and Framework – Construction of a curated database mapping AI security incidents, combined with the development of an AI-specific security framework and educational resources to help professionals manage AI-related cyber risks.
How to apply?
Complete the application form linked at the top of this page. Applications are accepted year-round and reviewed quarterly. Submission deadlines are:
- March 31
- June 30
- September 30
- December 31
Proposals are first reviewed in-house for fit and quality. Strong submissions are sent to technical advisors for further evaluation. If your proposal advances, we may follow up with written questions or a short call. We aim to make decisions within eight weeks of each deadline.
Who can apply?
We accept applications from individuals, teams, and organizations. Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to explain why they need grant funding.
Funding and terms
- We award $4.5–5.5M in total funding annually. Grants typically range from $10,000 to over $300,000, but we do not set fixed minimums or maximums. Applicants should request the amount they believe is appropriate, supported by a clear budget and scope.
- We fund both short-term and multi-year projects. For longer or higher-budget work, we may disburse funds in tranches, with later payments contingent on progress.
- We can fund overhead costs of up to 10% of direct research costs, provided they directly support the funded work.
- Grants are subject to basic reporting requirements. Grantees are expected to submit brief progress updates at regular intervals, describing use of funds and progress against agreed milestones.
- Tax obligations vary by country and organization type. Applicants are responsible for understanding and complying with any applicable tax requirements.
For full eligibility criteria, financial details, and documentation requirements, see our Grant Guidelines and Conditions →
Further questions or feedback?
Please contact us at [email protected]