
AI for Science & Safety Nodes

Funding, community hub, and local compute in San Francisco and Berlin.

New hubs for AI-powered science and safety

Two new hubs – in San Francisco and Berlin – offer project funding, office and community spaces, and local compute for ambitious researchers and builders who use AI to advance science and safety.

Ecosystem for decentralized, AI-driven progress

Artificial intelligence is accelerating the pace of discovery across science and technology. But today’s AI ecosystem risks centralizing compute, talent, and decision-making power – concentrating capabilities in ways that could undermine both innovation and safety.

To counter this development, we are building a decentralized network of nodes dedicated to AI-powered science and safety. Each node combines grant funding with office and community spaces, programming and in-house compute to accelerate project development. The goal is to empower researchers with a mission-aligned ecosystem where AI-driven progress remains open, secure, and aligned with human flourishing.

What the nodes offer

Use this form to apply. The next application deadline is December 31. After that, application deadlines fall on the last day of each month. The AI Nodes will open in San Francisco and Berlin in early 2026.

The rest of this page outlines the terms of our grants and the kinds of projects we are excited to support with compute and node access.

AI-first projects

To keep up with and leverage increasing AI capabilities, we give priority to projects that use AI as the primary engine for progress across our focus areas. The goal is to enable science and safety to accelerate in tandem with AI – for the safe and beneficial evolution of intelligence.

Focus Areas

We are excited to fund and support work in the following areas.

1. AI for Security 

Traditional security paradigms, often reactive, piecemeal and human-driven, cannot scale to match the speed, scale, and complexity of AI-supported attacks. We seek to support self-improving defense systems where AI autonomously identifies vulnerabilities, generates formal proofs, red-teams, and strengthens the world’s digital infrastructure.

2. Private AI

To ensure that AI progress occurs openly without sacrificing privacy, we want to support work that applies AI to enhance confidential compute environments, scale privacy mechanisms for handling data, and design infrastructure that distributes trust.

3. Decentralized & Cooperative AI

We fund work that builds decentralized intelligence ecosystems – where AI systems can cooperate, negotiate, and align – so societies remain resilient in a multipolar world. We are especially interested in projects that enable peaceful human–AI co-existence and create new AI-enabled mechanisms for cooperation.

4. AI for Science & Epistemics

In addition to applying AI to specific problems, we need better platforms, tools and data infrastructure to accelerate AI-guided scientific progress generally. Similarly, to get our sense-making ready for rapid change, we are interested in funding work that applies AI to improve forecasting and general epistemic preparedness.

5. AI for Neuro, Brain-Computer Interfaces & Whole Brain Emulation

We are interested in work that uses frontier models to map, simulate, and understand biological intelligence – building the foundations for hybrids between human and artificial cognition, from brain-computer interfaces to whole brain emulation. We care about this domain specifically for its potential to improve humanity’s defensive position as AI advances.

6. AI for Longevity Biotechnology

We want to fund work that applies AI to make progress on scientific frontiers in longevity biotechnology – from biostasis and replacement, to gene therapy and exosomes.

7. AI for Molecular Nanotechnology

We support work that uses AI to make progress on scientific frontiers in molecular nanotechnology – from design and simulation, to construction and assembly of nanomachines.

Grants connected to the hubs

FAQ

How much funding can be requested? 

We award at least $3M in total funding annually. Grants typically range from $10,000 to $200,000, with higher amounts awarded in the AI safety-oriented focus areas and smaller amounts to longevity biotech and molecular nanotech projects.

What is required of applicants? 

To create community among mission-aligned projects, we strongly prefer applicants who plan to use the nodes in San Francisco or Berlin. We will accept “funding-only” projects only in exceptional cases.

What are the application deadlines? 

Application deadlines are on the last day of every month. We review applications on a monthly basis until the nodes are at capacity, so we recommend that you apply as soon as you are ready.

How do I apply?

By completing this application form – also linked at the top of this page.

What is the review process? 

The approximate review time is two months after each application deadline. You can request expedited processing, but we may not be able to honor it. Smaller funding requests may be reviewed faster.

Proposals are first reviewed in-house for fit and quality. Strong submissions are sent to technical advisors for further evaluation. If your proposal advances, we may follow up with written questions or a short call. Unfortunately, due to the number of applications we receive, we are unable to provide individual feedback to unsuccessful applicants.

Who can apply?

We accept applications from individuals, teams, and organizations. Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to explain why they need grant funding.

What are the evaluation criteria? 

Please note that the AI safety criteria do not apply to the AI for longevity biotechnology and molecular nanotechnology focus areas.

What are the funding terms? 

Further questions or feedback?

Please contact us at [email protected]

Funded before the current program

Ed Boyden

Boyden Lab

Uwe Kortshagen

University of Minnesota

Konrad Kording

Kording Lab

Adam Gleave

FAR AI

Chris Lakin

Independent

Florian Tramer

ETH Zurich

Harriet Farlow

Mileva Security Labs

Abhinav Singh

University of Oxford

Apart Research

OpenMined

Benjamin Wilson

Metaculus

Lovkush Agarwal

University of Cambridge

Chandler Smith

Cooperative AI Foundation

David Bloomin

MettaAi

Jonas Emanuel Müller

Convergence Analysis

Keenan Pepper

Salesforce

Kola Ayonrinde

MATS

Nora Ammann

PIBBSS

Joel Pyykkö

Independent

Roland Pihlakas

Simplify (Macrotec LLC)

Toby D. Pilditch

University of Oxford

Leonardo Christov-Moore

Institute for Advanced Consciousness Studies

MATS Research

Bradley Love

UCL

Catalin Mitelut

NYU and University of Basel

Jamie Joyce

The Society Library

Marc Carauleanu

AE Studio

Maximillian Schons

Eon Systems

Roman Bauer

University of Surrey

Isaak Freeman

Boyden Lab

PK Douglas

Neurotrust AI

Tom Burns

SciAI Center – Cornell University

Logan Thrasher Collins

Washington University

Funders

Fund the science of the future.

Donate today