AI for Science & Safety Nodes
Funding, community hub, and local compute in San Francisco and Berlin.
New hubs for AI-powered science and safety
Two new hubs – in San Francisco and Berlin – offer project funding, office and community spaces, and local compute for ambitious researchers and builders who use AI to advance science and safety.
Ecosystem for decentralized, AI-driven progress
Artificial intelligence is accelerating the pace of discovery across science and technology. But today’s AI ecosystem risks centralizing compute, talent, and decision-making power – concentrating capabilities in ways that could undermine both innovation and safety.
To counter this development, we are building a decentralized network of nodes dedicated to AI-powered science and safety. Each node combines grant funding with office and community spaces, programming, and in-house compute to accelerate project development. The goal is to empower researchers with a mission-aligned ecosystem where AI-driven progress remains open, secure, and aligned with human flourishing.
What the nodes offer
- Grants. Grantees get funding, office space, and invitations to relevant workshops and events, with in-house compute available to eligible projects.
- Office/event space. Apply for free access to the nodes, even if you are not a grantee.
- Compute. Apply for free compute for your project, even if you are not a grantee.
Use this form to apply. The next application deadline is December 31. After that, application deadlines fall on the last day of each month. The AI Nodes will open in San Francisco and Berlin in early 2026.
The rest of this page outlines the terms of our grants and the kinds of projects we are excited to support with compute and node access.
AI-first projects
To keep up with and leverage increasing AI capabilities, we give priority to projects that use AI as the primary engine for progress across our focus areas. The goal is to enable science and safety to accelerate in tandem with AI – for the safe and beneficial evolution of intelligence.
Focus Areas
We are excited to fund and support work in the following areas.
1. AI for Security
Traditional security paradigms – often reactive, piecemeal, and human-driven – cannot keep pace with the speed, scale, and complexity of AI-supported attacks. We seek to support self-improving defense systems in which AI autonomously identifies vulnerabilities, generates formal proofs, red-teams systems, and strengthens the world’s digital infrastructure.
2. Private AI
To ensure that AI progress occurs openly without sacrificing privacy, we want to support work that applies AI to enhance confidential compute environments, scale privacy mechanisms for handling data, and design infrastructure that distributes trust.
3. Decentralized & Cooperative AI
We fund work that builds decentralized intelligence ecosystems – where AI systems can cooperate, negotiate, and align – so societies remain resilient in a multipolar world. We are especially interested in projects that enable peaceful human–AI co-existence and create new AI-enabled mechanisms for cooperation.
4. AI for Science & Epistemics
In addition to applying AI to specific problems, we need better platforms, tools, and data infrastructure to accelerate AI-guided scientific progress generally. Similarly, to prepare our collective sense-making for rapid change, we are interested in funding work that applies AI to improve forecasting and general epistemic preparedness.
5. AI for Neuro, Brain-Computer Interfaces & Whole Brain Emulation
We are interested in work that uses frontier models to map, simulate, and understand biological intelligence – building the foundations for hybrids between human and artificial cognition, from brain-computer interfaces to whole brain emulation. We care about this domain specifically for its potential to improve humanity’s defensive position as AI advances.
6. AI for Longevity Biotechnology
We want to fund work that applies AI to make progress on scientific frontiers in longevity biotechnology – from biostasis and replacement, to gene therapy and exosomes.
7. AI for Molecular Nanotechnology
We support work that uses AI to make progress on scientific frontiers in molecular nanotechnology – from design and simulation, to construction and assembly of nanomachines.
Grants connected to the hubs
- Grantees are invited to build together in Berlin or San Francisco. To create community among mission-aligned projects, we strongly prioritize applicants who want to be an active part of our spaces (free of charge). We will accept “funding-only” projects only in exceptional cases.
- Grantees are invited to events advancing the frontier of their field. Grantees are expected to join one of our travel-paid workshops in Berlin or San Francisco to connect with other grantees building relevant projects. In addition, you can propose and expect plenty of events, sprints, and other collaborations in the nodes throughout the year.
- Local, private compute is available for eligible projects. Tell us in the application how much compute you need and for what purpose, and we will provide eligible projects with a compute budget or access to local compute resources, especially for privacy-oriented projects.
FAQ
How much funding can be requested?
We award at least $3M in total funding annually. Grants typically range from $10,000 to $200,000, with larger amounts awarded in the AI safety-oriented focus areas and smaller amounts awarded to longevity biotech and molecular nanotech projects.
What is required of applicants?
To create community among mission-aligned projects, we strongly prefer applicants who plan to use the nodes in San Francisco or Berlin. We will accept “funding-only” projects only in exceptional cases.
What are the application deadlines?
Application deadlines are on the last day of every month. We review applications on a monthly basis until the nodes are at capacity, so we recommend that you apply as soon as you are ready.
How do I apply?
By completing this application form – also linked at the top of this page.
What is the review process?
Reviews typically take about two months from each application deadline. You can request expedited processing, but we may not be able to honor such requests. Smaller funding requests may be reviewed more quickly.
Proposals are first reviewed in-house for fit and quality. Strong submissions are sent to technical advisors for further evaluation. If your proposal advances, we may follow up with written questions or a short call. Unfortunately, due to the number of applications we receive, we are unable to provide individual feedback to unsuccessful applicants.
Who can apply?
We accept applications from individuals, teams, and organizations. Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to explain why they need grant funding.
What are the evaluation criteria?
- Impact on reducing existential risks from AI: the extent to which the project can reduce existential risks associated with AI, focusing on achieving significant advancements in AI safety within short timelines.
- Feasibility within short AGI timelines: the project’s ability to achieve meaningful progress within the anticipated short timeframes for AGI development. We prioritize projects that can demonstrate concrete milestones and deliverables in the next 1-3 years.
- Alignment with our focus areas: the degree to which the project addresses one or more of the focus areas outlined on this page.
- Capability to execute: the qualifications, experience, and resources of the applicant(s) to successfully carry out the proposed work. Strong teams with proven expertise in AI safety or related fields will be prioritized.
- High-risk, high-reward potential: the level of risk involved in the project, balanced with the potential for substantial, transformative impact on the future of AI safety. We encourage speculative, high-risk projects with the potential to drive significant change if successful.
- Preference for open source: we prefer open-source projects, unless there are specific reasons preventing this.
Please note that the AI safety criteria do not apply to the AI for longevity biotechnology and molecular nanotechnology focus areas.
What are the funding terms?
- We fund both short-term and longer projects. Grants are typically paid in one lump sum. However, for larger projects spanning multiple years, payments may be made in tranches, with each subsequent tranche contingent upon the successful completion and reporting of previous milestones.
- We can fund overhead costs of up to 10% of direct research costs, where the overhead directly supports the funded work.
- Successful applicants must pass our due diligence process, which includes confirming your connections to Foresight Institute, disclosing any ongoing criminal proceedings, bankruptcies, or similar, and sharing an itemized budget, project plan, and organizational documents.
- By accepting funding, grantees agree that we may list their project on our website and share the project title and project lead on, for example, social media. If you prefer that your project remain private, please inform us.
- Grants are subject to basic reporting requirements. Grantees are expected to submit brief progress updates at regular intervals, describing use of funds and progress against agreed milestones.
- Tax obligations vary by country and organization type. Applicants are responsible for understanding and complying with any applicable tax requirements.
Further questions or feedback?
Please contact us at [email protected]
Grantees
Funded before current program
- Uwe Kortshagen, University of Minnesota
- Florian Tramer, ETH Zurich
- Harriet Farlow, Mileva Security Labs
- Apart Research
- OpenMined
- Benjamin Wilson, Metaculus
- Lovkush Agarwal, University of Cambridge
- Chandler Smith, Cooperative AI Foundation
- Jonas Emanuel Müller, Convergence Analysis
- Kola Ayonrinde, MATS
- Joel Pyykkö, Independent
- Toby D. Pilditch, University of Oxford
- Leonardo Christov-Moore, Institute for Advanced Consciousness Studies
- MATS Research
- Maximillian Schons, Eon Systems
- Isaak Freeman, Boyden Lab
- PK Douglas, Neurotrust AI