Next application deadline: September 30th
In this area, we seek proposals that leverage neuroscience and neurotechnology to address AI safety from two angles: making AI systems safer through brain-inspired approaches, and enhancing human capabilities so that people remain relevant alongside increasingly powerful AI.
The human brain remains our only working example of general intelligence that integrates capabilities while maintaining alignment with human values. By studying neural implementations of contextual understanding, empathy, and causal inference, we can develop AI systems with similar beneficial properties. Simultaneously, neurotechnology can help ensure humans maintain meaningful control over advanced AI systems by bridging the potentially growing cognitive gap between human and machine intelligence.
We are interested in promising directions including: using neural data to improve AI alignment by fine-tuning models to match brain activations; developing “lo-fi” brain emulations that capture functional aspects of human cognition; creating secure brain-computer interfaces for effective human-AI collaboration; and designing neuromorphic systems that implement specialized cognitive functions like empathy to complement mainstream AI.
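To make the first of these directions concrete, here is a minimal sketch of what fine-tuning a model against brain activations could look like: a standard task loss combined with an auxiliary term that pulls a hidden layer's representational geometry toward recorded neural responses to the same stimuli, measured here with linear centered kernel alignment (CKA). The `model` interface, the `alpha` weighting, and the assumption that brain responses come pre-aligned to stimuli are illustrative choices, not methods prescribed by this call.

```python
import torch
import torch.nn.functional as F

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear centered kernel alignment between two activation matrices.

    x: (n_stimuli, d_model) hidden activations from the model.
    y: (n_stimuli, d_neural) brain responses to the same stimuli.
    Returns a similarity in [0, 1]; 1 means identical representational geometry.
    """
    x = x - x.mean(dim=0, keepdim=True)   # center each feature
    y = y - y.mean(dim=0, keepdim=True)
    cross = (x.T @ y).norm() ** 2          # ||X^T Y||_F^2
    return cross / ((x.T @ x).norm() * (y.T @ y).norm() + 1e-8)

def training_step(model, batch, optimizer, alpha=0.1):
    """One step of task training plus a neural-alignment penalty.

    Hypothetical interface: `model(stimuli)` returns (logits, hidden), where
    `hidden` is the layer being aligned, and `batch` pairs each stimulus with
    both a task label and a recorded brain response.
    """
    stimuli, labels, brain = batch
    logits, hidden = model(stimuli)
    task_loss = F.cross_entropy(logits, labels)
    align_loss = 1.0 - linear_cka(hidden, brain)  # low CKA -> high penalty
    loss = task_loss + alpha * align_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```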
Recent advances have dramatically increased feasibility in these areas. Connectomics costs have fallen; neural and behavioral recording technologies are advancing rapidly; digital twin models are on the horizon; and neuroscience-informed AI models already show benefits for robustness and alignment.
Our long-term hope is that this research prevents AI from evolving into black-box systems with alien reasoning, instead grounding development in our understanding of safe, embodied, and socially embedded “human-inspired” cognition. Early investment in open, rigorous neuro-AI research could yield lasting infrastructure for aligning intelligence with human values while maintaining human agency through augmented capabilities and more natural human-AI interaction.
We aim to support functionally grounded “lo-fi” brain emulations that simulate human-like cognition without full structural fidelity.
We welcome proposals that use neural and behavioral data to fine-tune AI models toward safer, more human-compatible behavior.
We seek work on brain-computer interfaces (BCIs) and neurotech that augment human capabilities and enable more natural, high-bandwidth, and interpretable human-AI collaboration.
We support efforts to model AI architectures on biological systems and to apply neuroscience methods to make AI more transparent and human-like.
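As one example of the kind of neuroscience-method transfer described above, the sketch below applies a classic lesion-study design to an artificial network: each unit in a layer is zeroed out in turn, and the resulting increase in loss is taken as that unit's importance. The choice of an `nn.Linear` layer, the data loader, and the criterion are illustrative assumptions, not a prescribed method.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def lesion_importance(model, layer: nn.Linear, loader, criterion, device="cpu"):
    """Score each unit of `layer` by zeroing ('lesioning') it and measuring
    the increase in average loss, analogous to a neuroscience lesion study.

    Assumes `layer` is an nn.Linear inside `model`, with one output row
    (and bias entry) per unit.
    """
    model.eval()

    def mean_loss():
        total, n = 0.0, 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            total += criterion(model(x), y).item() * len(y)
            n += len(y)
        return total / n

    baseline = mean_loss()
    scores = []
    for unit in range(layer.out_features):
        saved_w = layer.weight[unit].clone()
        saved_b = layer.bias[unit].clone() if layer.bias is not None else None
        layer.weight[unit].zero_()                  # lesion the unit
        if layer.bias is not None:
            layer.bias[unit] = 0.0
        scores.append(mean_loss() - baseline)       # importance = loss increase
        layer.weight[unit] = saved_w                # restore the unit
        if layer.bias is not None:
            layer.bias[unit] = saved_b
    return scores
```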
We prioritize work that grounds advanced AI development in our best understanding of natural intelligence while preserving human agency. Projects should demonstrate a clear path toward safer, more interpretable, and more human-compatible AI systems.
We especially welcome proposals along the focus areas above.
Examples of past projects in this area include work hosted at:
Massachusetts Institute of Technology
University of Minnesota
University College London
The Society Library
University of Pennsylvania
University of Surrey
Massachusetts Institute of Technology
University College London (Honorary)
Okinawa Institute of Science and Technology
Washington University