Neurotechnology for Safe AI Grant
We are seeking proposals for neuroscience-inspired approaches to AI alignment and human capability enhancement—such as brain-computer interfaces, lo-fi brain emulations, and brain-aligned AI models.
Focus Areas
Specific work we are interested in funding:
- Lo-Fi emulations and embodied cognition
- Brain-aligned AI models
- Secure and trustworthy neurotechnology for human-AI interaction
- Biologically-inspired architectures and interpretability tools
Neurotechnology to integrate with, or compete against, AGI
The human brain remains our only working example of general intelligence that integrates capabilities while maintaining alignment with human values. By studying neural implementations of contextual understanding, empathy, and causal inference, we can develop AI systems with similar beneficial properties. Simultaneously, neurotechnology can help ensure humans maintain meaningful control over advanced AI systems by bridging the potentially growing cognitive gap between human and machine intelligence.
We seek proposals that leverage neuroscience and neurotechnology to address AI safety from two angles: making AI systems safer through brain-inspired approaches and enhancing human capabilities to remain relevant alongside increasingly powerful AI.
Our long-term hope is that this research prevents AI from evolving into black-box systems with alien reasoning, instead grounding development in our understanding of safe, embodied, and socially embedded “human-inspired” cognition. Early investment in open, rigorous neuro-AI research could yield lasting infrastructure for aligning intelligence with human values while maintaining human agency through augmented capabilities and more natural human-AI interaction.
What we want to fund
We are interested in promising directions including: using neural data to improve AI alignment by fine-tuning models to match brain activations; developing “lo-fi” brain emulations that capture functional aspects of human cognition; creating secure brain-computer interfaces for effective human-AI collaboration; and designing neuromorphic systems that implement specialized cognitive functions like empathy to complement mainstream AI.
Recent advances have dramatically increased feasibility in these areas. Connectomics costs have fallen; neural and behavioral recording technologies are advancing rapidly; digital twin models are on the horizon; and neuroscience-informed AI models already show benefits for robustness and alignment.
Specific work we are interested in
1. Lo-Fi emulations and embodied cognition
We aim to support functionally grounded “lo-fi” brain emulations that simulate human-like cognition without full structural fidelity.
- Build predictive cognitive and behavioral models of model organisms from combined neural and behavioral data.
- Address a well-defined bottleneck in the digital twin space, such as enabling technology that brings down data generation costs or improves prediction of out-of-distribution behavior.
- Create multimodal ML approaches that integrate a diversity of neural and behavioral input data.
- Apply these models to domains where safe, embodied cognition can inform the development of aligned AI systems.
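As a concrete (if deliberately toy) illustration of the first bullet above, a predictive model of an organism can start as simply as a regularized decoder fit from neural recordings to behavior and evaluated on held-out trials. Everything below uses synthetic stand-in data; the variable names, dimensions, and ridge penalty are hypothetical choices, not a prescribed method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical recordings: 200 trials of 40-channel neural activity and a
# 1-D behavioral readout (e.g. movement speed), both driven by a shared
# low-dimensional latent signal plus noise.
latent = rng.normal(size=(200, 3))
neural = latent @ rng.normal(size=(3, 40)) + 0.1 * rng.normal(size=(200, 40))
behavior = latent @ rng.normal(size=3) + 0.1 * rng.normal(size=200)

train, test = slice(0, 150), slice(150, 200)

# Ridge-regression decoder: the simplest "predictive model of the organism".
lam = 1.0
A = neural[train].T @ neural[train] + lam * np.eye(40)
w = np.linalg.solve(A, neural[train].T @ behavior[train])

pred = neural[test] @ w
r = np.corrcoef(pred, behavior[test])[0, 1]  # held-out predictive accuracy
```

Real proposals would of course replace the synthetic arrays with actual recordings and stronger model classes; the point is that out-of-sample prediction, not in-sample fit, is the relevant success criterion.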
2. Brain-aligned AI models
We welcome proposals that use neural and behavioral data to fine-tune AI models toward safer, more human-compatible behavior.
- Use large-scale fMRI, EEG, and behavioral datasets to train or regularize AI systems for improved alignment with human values and reasoning.
- Develop pipelines that allow real-time or offline integration of human neural signals into AI training and evaluation.
- Explore the effectiveness of brain-aligned fine-tuning in enhancing robustness, interpretability, and pro-social behavior in AI.
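One minimal way the brain-aligned fine-tuning described above can be framed is as an auxiliary loss that pulls a model's hidden representations toward recorded neural responses while it learns a task. The sketch below uses synthetic stand-ins for both the task and the "neural" data; the architecture, loss weighting, and all names are illustrative assumptions, not a claim about how any particular lab does this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 100 stimuli with 8 features, a binary behavioral task,
# and hypothetical 5-channel "neural responses" to the same stimuli
# (real work would use fMRI/EEG recordings here).
X = rng.normal(size=(100, 8))
y = (X[:, 0] > 0).astype(float)          # toy task labels
neural = X @ rng.normal(size=(8, 5))     # synthetic neural responses

W1 = rng.normal(scale=0.1, size=(8, 5))  # hidden layer, width = channel count
w2 = rng.normal(scale=0.1, size=5)       # task readout
lam, lr = 0.5, 0.05                      # alignment weight, learning rate

for step in range(500):
    h = X @ W1                           # hidden representation
    p = 1 / (1 + np.exp(-(h @ w2)))      # logistic task prediction
    task_err = p - y
    rep_err = h - neural                 # neural-alignment residual
    # Gradients of: task cross-entropy + lam * mean squared alignment loss
    gW1 = X.T @ (np.outer(task_err, w2) + lam * rep_err) / len(X)
    gw2 = h.T @ task_err / len(X)
    W1 -= lr * gW1
    w2 -= lr * gw2

acc = ((p > 0.5) == y).mean()                          # task performance
align = np.corrcoef(h.ravel(), neural.ravel())[0, 1]   # rep. alignment
```

The design question such proposals must answer is how to set and validate the trade-off `lam` so that neural regularization improves robustness and value alignment rather than merely constraining capacity.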
3. Secure and trustworthy neurotechnology for human-AI interaction
We seek work on brain-computer interfaces (BCIs) and neurotech that augment human capabilities and enable more natural, high-bandwidth, and interpretable human-AI collaboration.
- Build open-source, privacy-preserving BCIs that enhance bidirectional communication between humans and AI.
- Develop neuroadaptive systems that adjust AI behavior based on human neural feedback for shared control, trust calibration, or emotional resonance.
- Build hybrid systems where neurotechnology helps mediate or supervise AI decision-making in high-stakes settings.
4. Biologically-inspired architectures and interpretability tools
We support efforts to model AI architectures on biological systems and to apply neuroscience methods to make AI more transparent and human-like.
- Develop neuromorphic co-processors or simulated modules based on specialized brain functions like empathy or social cognition.
- Use tools from neuroscience (e.g. circuit tracing, representational similarity analysis) to study and improve interpretability of both brains and AIs.
- Create infrastructure and datasets to link connectomics, simulation, and AI safety research.
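Of the neuroscience tools named above, representational similarity analysis (RSA) is perhaps the most directly transferable to AI interpretability: it compares the geometry of brain and model responses without requiring a unit-to-unit mapping. A minimal sketch on synthetic data, assuming Pearson-distance RDMs and a Spearman comparison (common but not the only choices):

```python
import numpy as np

def rdm(acts):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between activation patterns for each pair of stimuli (rows)."""
    return 1 - np.corrcoef(acts)

def rsa_score(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks (no ties expected)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

rng = np.random.default_rng(1)
stimuli = rng.normal(size=(20, 50))           # 20 stimuli, 50 features
brain = stimuli @ rng.normal(size=(50, 30))   # hypothetical voxel responses
model = stimuli @ rng.normal(size=(50, 64))   # hypothetical model activations

score = rsa_score(rdm(brain), rdm(model))     # shared stimulus geometry
control = rsa_score(rdm(brain), rdm(rng.normal(size=(20, 64))))  # unrelated
```

Because `brain` and `model` are projections of the same stimuli, their RDMs agree far better than the unrelated control; applied to real recordings and network layers, the same score quantifies how brain-like a model's representations are.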
Our priorities
We prioritize work that grounds advanced AI development in our best understanding of natural intelligence while preserving human agency. Projects should demonstrate a clear path toward safer, more interpretable, and more human-compatible AI systems. We especially welcome proposals that:
- Develop neurotechnologies that enhance human capabilities and support meaningful human-AI collaboration.
- Translate neuroscience insights into rapidly scalable tools for AI alignment and interpretability.
- Use pragmatic, functional approaches—like lo-fi emulation—over costly long-term research.
- Build open, privacy-preserving infrastructure for neuro-AI research that resists misuse and centralization.
- Bridge disciplines across neuroscience, AI safety, and cognitive science to tackle shared challenges.
Previously funded work
Examples of past projects we have funded include:
- Neural Development WBE Simulation – A novel approach to whole brain emulation that models how a mature brain develops from a single precursor cell, providing a new pathway to functionally faithful brain emulations.
- Biologically-Inspired AI via Canonical Microcircuits – A project combining computational neuroscience with AI to simulate human-like cognition and behavior using biologically plausible architectures based on canonical microcircuits.
- Connectomics Protocol for Brain Mapping – Development of a standardized protocol for light-based whole-brain connectomics, dramatically lowering the cost and time required to acquire neural data critical for brain-informed AI development.
- Hierarchical Neural ODE Models – Creation of recurrent neural architectures based on canonical brain microcircuits and differential equations, bridging microscale dynamics with system-level cognition to inform interpretable and human-aligned AI design.
- Neurotech Mapping Infrastructure – Customizing an automated debate-mapping platform to map the neurotech landscape, enabling funders and researchers to navigate field claims, actors, and subtopics relevant to brain-based AI safety.
How to apply?
Complete the application form linked at the top of this page. Applications are accepted year-round and reviewed quarterly. Submission deadlines are:
- March 31
- June 30
- September 30
- December 31
Proposals are first reviewed in-house for fit and quality. Strong submissions are sent to technical advisors for further evaluation. If your proposal advances, we may follow up with written questions or a short call. We aim to make decisions within eight weeks of each deadline.
Who can apply?
We accept applications from individuals, teams, and organizations. Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to explain why they need grant funding.
Funding and terms
- We award between $4.5–5.5M in total funding annually. Grants typically range from $10,000 to over $300,000, but we do not set fixed minimums or maximums. Applicants should request the amount they believe is appropriate, supported by a clear budget and scope.
- We fund both short-term and multi-year projects. For longer or higher-budget work, we may disburse funds in tranches, with later payments contingent on progress.
- We can fund overhead costs up to 10% of direct research costs, where these directly support the funded work.
- Grants are subject to basic reporting requirements. Grantees are expected to submit brief progress updates at regular intervals, describing use of funds and progress against agreed milestones.
- Tax obligations vary by country and organization type. Applicants are responsible for understanding and complying with any applicable tax requirements.
For full eligibility criteria, financial details, and documentation requirements, see our Grant Guidelines and Conditions →
Further questions or feedback?
Please contact us at [email protected]
Grantees

- Ed Boyden – Boyden Lab
- Uwe Kortshagen – University of Minnesota
- Bradley Love – UCL
- Dr. Catalin Mitelut – Netholabs
- Konrad Kording – Kording Lab
- Marc Carauleanu – AE Studio
- Maximillian Schons – Eon Systems
- Isaak Freeman – Massachusetts Institute of Technology
- PK Douglas – University College London (Honorary)
- Tom Burns – SciAI Center – Cornell University