Neurotechnology for Safe AI Grant

We are seeking proposals for neuroscience-inspired approaches to AI alignment and human capability enhancement, such as brain-computer interfaces, lo-fi brain emulations, and brain-aligned AI models.

Focus Areas

Specific work we are interested in funding:

  • Lo-Fi emulations and embodied cognition
  • Brain-aligned AI models
  • Secure and trustworthy neurotechnology for human-AI interaction
  • Biologically-inspired architectures and interpretability tools

Neurotechnology to integrate with, or compete against, AGI

The human brain remains our only working example of general intelligence that integrates capabilities while maintaining alignment with human values. By studying neural implementations of contextual understanding, empathy, and causal inference, we can develop AI systems with similar beneficial properties. Simultaneously, neurotechnology can help ensure humans maintain meaningful control over advanced AI systems by bridging the potentially growing cognitive gap between human and machine intelligence.

We seek proposals that leverage neuroscience and neurotechnology to address AI safety from two angles: making AI systems safer through brain-inspired approaches and enhancing human capabilities to remain relevant alongside increasingly powerful AI.

Our long-term hope is that this research prevents AI from evolving into black-box systems with alien reasoning, instead grounding development in our understanding of safe, embodied, and socially embedded “human-inspired” cognition. Early investment in open, rigorous neuro-AI research could yield lasting infrastructure for aligning intelligence with human values while maintaining human agency through augmented capabilities and more natural human-AI interaction.

What we want to fund

We are interested in promising directions including:

  • using neural data to improve AI alignment by fine-tuning models to match brain activations;
  • developing “lo-fi” brain emulations that capture functional aspects of human cognition;
  • creating secure brain-computer interfaces for effective human-AI collaboration;
  • designing neuromorphic systems that implement specialized cognitive functions, such as empathy, to complement mainstream AI.
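As one illustration of what “fine-tuning models to match brain activations” can mean in practice, a common starting point is to score the representational similarity between a model's internal activations and neural recordings for the same stimuli, for example with linear Centered Kernel Alignment (CKA); such a score can then serve as an auxiliary training objective. The sketch below is purely illustrative and not a description of any funded project; the function name and matrix shapes are our own assumptions.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices.

    X: (n_stimuli, d_model) model activations for a set of stimuli.
    Y: (n_stimuli, d_neural) neural recordings for the same stimuli.
    Returns a score in [0, 1]; 1 means identical representational geometry.
    (Illustrative sketch; shapes and naming are assumptions.)
    """
    # Center each feature dimension across stimuli.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    denominator = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return numerator / denominator
```

Because CKA is invariant to rotations and isotropic scaling of either representation, it compares representational geometry rather than raw coordinates, which is why it is a popular choice for model–brain comparisons.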

Recent advances have dramatically increased feasibility in these areas. Connectomics costs have fallen; neural and behavioral recording technologies are advancing rapidly; digital twin models are on the horizon; and neuroscience-informed AI models already show benefits for robustness and alignment.

Specific work we are interested in

1. Lo-Fi emulations and embodied cognition

We aim to support functionally grounded “lo-fi” brain emulations that simulate human-like cognition without full structural fidelity.

2. Brain-aligned AI models

We welcome proposals that use neural and behavioral data to fine-tune AI models toward safer, more human-compatible behavior.

3. Secure and trustworthy neurotechnology for human-AI interaction

We seek work on brain-computer interfaces (BCIs) and other neurotechnologies that augment human capabilities and enable more natural, high-bandwidth, and interpretable human-AI collaboration.

4. Biologically-inspired architectures and interpretability tools

We support efforts to model AI architectures on biological systems and to apply neuroscience methods to make AI more transparent and human-like.

Our priorities

We prioritize work that grounds advanced AI development in our best understanding of natural intelligence while preserving human agency. Projects should demonstrate a clear path toward safer, more interpretable, and more human-compatible AI systems. We especially welcome proposals that:

Previously funded work

Examples of past projects we have funded include:

How to apply?

Complete the application form linked at the top of this page. Applications are accepted year-round and reviewed quarterly. Submission deadlines are:

Proposals are first reviewed in-house for fit and quality. Strong submissions are sent to technical advisors for further evaluation. If your proposal advances, we may follow up with written questions or a short call. We aim to make decisions within eight weeks of each deadline.

Who can apply?

We accept applications from individuals, teams, and organizations. Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to justify why they need grant funding.

Funding and terms

For full eligibility criteria, financial details, and documentation requirements, see our Grant Guidelines and Conditions →

Further questions or feedback?

Please contact us at [email protected]

Grantees

Ed Boyden

Boyden Lab

Uwe Kortshagen

University of Minnesota

Bradley Love

UCL

Catalin Mitelut

Netholabs

Jamie Joyce

The Society Library

Konrad Kording

Kording Lab

Marc Carauleanu

AE Studio

Maximillian Schons

Eon Systems

Roman Bauer

University of Surrey

Isaak Freeman

Massachusetts Institute of Technology

PK Douglas

University College London (Honorary)

Tom Burns

SciAI Center – Cornell University

Logan Thrasher Collins

Washington University

Fund the science of the future.

Donate today