In this area, we would like to see proposals that use AI to automate scientific research and improve our ability to forecast key technological developments, especially those related to AI itself. As AI systems grow more capable, they can dramatically accelerate discovery across fields like neuroscience, biology, and materials science, while also helping us understand and steer the trajectory of AI development.
We are particularly interested in tools that automate parts of beneficial AI R&D, such as model evaluation, alignment research, interpretability, and benchmark generation. We also welcome R&D automation across other scientific disciplines, especially where it can generalize across domains and empower human understanding in the research process.
With respect to forecasting and modeling, we believe that improving our forecasting capability could create opportunities for engineering safer paths between the technological realities of today and the possibilities of tomorrow. Today, progress in human forecasting, modeling, and simulation is bottlenecked because it is simply not practical to establish and continuously update manual forecasts and models across all technological domains of interest and their complex interplay. AI-enabled forecasts, models, and simulations don't face these opportunity costs and could run around the clock on a broader range of interrelated factors.
Early choices in how we automate research may define the norms for how AI is used to generate knowledge across domains for decades to come. By supporting open-source, pro-social approaches to research and forecasting automation, we aim to build a more distributed, transparent, and trustworthy innovation ecosystem.
We aim to fund tools that automate key parts of scientific research, such as reading papers, generating hypotheses, or designing and executing experiments, through open-source assistants that can be adapted across domains.
We are looking for systems that use AI to generate, compare, and collaborate on forecasts of critical developments, such as AI capabilities, regulation, or biosecurity risks, and present them in ways that builders and decision-makers can act on.
Proposals should clearly show how the work will promote civilization-defending, human-empowering use of AI, with particular attention to reducing existential risks from advanced AI systems.
We prioritize projects that:
Examples of past projects in this area include:
Metaculus: Self-operating Calculations of Risk for Yielding Accurate Insights will develop an open-source forecasting bot powered by AI-driven research tools to enhance forecasting accuracy and volume. The bot and tools will support human reasoning while providing a foundation for developers working on forecasting automation, fostering innovation and further advancements in the field.
University of Cambridge: This project aims to automate at least one part of the AI Safety research pipeline, such as generating or refining research ideas or writing research code. It also investigates how AI Safety researchers use AI tools and designs solutions to address any blockers identified.