Request for Proposals: Automating Research and Forecasting
We are seeking proposals for work that uses AI to automate scientific research workflows and generate reliable forecasts of technological change. This includes open-source assistants, continuous modeling systems, and infrastructure for collaborative forecasting.
Focus Areas
Specific work we are interested in funding:
- Open-source AI research assistants
- Automated forecasting systems
Automating research and forecasting
As AI systems grow more capable, they can dramatically accelerate discovery across fields like neuroscience, biology, and materials science, while also helping us understand and steer the trajectory of AI development.
Early choices in how we automate research may define the norms for how AI is used to generate knowledge across domains for decades to come. By supporting open-source, pro-social approaches to research and forecasting automation, we aim to build a more distributed, transparent, and trustworthy innovation ecosystem.
What we want to fund
We are interested in proposals that use AI to automate scientific research and improve our ability to forecast key technological developments—especially those related to AI itself. We are particularly interested in tools that automate parts of beneficial AI R&D—such as model evaluation, alignment research, interpretability, and benchmark generation. We also consider R&D automation across other scientific disciplines, especially where it has the potential to generalize across domains and to empower human understanding in the research process.
With respect to forecasting and modeling, we believe that improving our forecasting capability could create opportunities for engineering safer paths between the technological realities of today and the possibilities of tomorrow. Today, progress in human forecasting, modeling, and simulation is bottlenecked because it is simply not practical to establish and continuously update manual forecasts and models across all technological domains of interest and their complex interplays. AI-enabled forecasts, models, and simulations do not face these opportunity costs and could run around the clock on a broader range of interrelated factors.
Specific work we are interested in
1. Open-source AI research assistants
We aim to fund tools that automate key parts of scientific research—like reading papers, generating hypotheses, or designing and executing experiments—through open-source assistants that can be adapted across domains (a minimal workflow sketch follows the list below).
- Development of open-source AI scientists to automate core research workflows—such as literature review, hypothesis generation, experiment design and execution, and result evaluation—in areas like AI safety, neuroscience, and biology, among others.
- Infrastructure to collect and share intermediate research products (e.g. working notes, annotations) to support future training and fine-tuning.
- Systems that enable collaborative research between humans and AI systems, allowing human or AI teams to interoperate and iteratively co-develop and validate research outputs.
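To make the kind of modular workflow described above concrete, here is a minimal sketch in Python. All names (ResearchState, review_literature, and so on) are illustrative assumptions rather than a prescribed design; in a real assistant each stage would call a retrieval or model backend and log its intermediate outputs so they can be shared and reused.

```python
# A minimal sketch of a staged research-assistant workflow, assuming each
# stage is a plain function so stages can be swapped out or run by different
# humans or AI systems. All names and stages here are illustrative.
from dataclasses import dataclass, field


@dataclass
class ResearchState:
    topic: str
    literature_notes: list[str] = field(default_factory=list)
    hypotheses: list[str] = field(default_factory=list)
    results: list[str] = field(default_factory=list)


def review_literature(state: ResearchState) -> ResearchState:
    # Placeholder: in practice this would call a retrieval or LLM backend.
    state.literature_notes.append(f"Key findings on {state.topic}")
    return state


def generate_hypotheses(state: ResearchState) -> ResearchState:
    state.hypotheses.append(f"Hypothesis derived from: {state.literature_notes[-1]}")
    return state


def run_experiments(state: ResearchState) -> ResearchState:
    state.results.append(f"Result for: {state.hypotheses[-1]}")
    return state


PIPELINE = [review_literature, generate_hypotheses, run_experiments]

state = ResearchState(topic="interpretability benchmarks")
for stage in PIPELINE:
    state = stage(state)  # intermediate state can be logged and shared
print(state.results)
```

The point of the staged structure is that intermediate research products (notes, hypotheses, results) remain inspectable between stages, which is what makes the collaboration and data-sharing infrastructure described above possible.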
2. Automated forecasting systems
We are looking for systems that use AI to generate, compare, and collaborate on forecasts of critical developments—such as AI capabilities, regulation, or biosecurity risks—and present them in ways that builders and decision-makers can act on.
- AI forecasting and modeling systems that operate continuously to track and predict critical developments in science, technology, and global risk.
- Tools and benchmarks for aggregating, calibrating, and evaluating forecasts from multiple AI systems (see the sketch after this list).
- Tools for past-casting: training and testing models only on historical data to simulate long-range forecasting reliability.
- Forecasting and modeling tools designed for institutional use, with clear audit trails, source tracing, and version control.
- Systems that generate large volumes of self-resolving forecast questions to fuel training and evaluation at scale.
- Collaborative approaches that allow various forecasting and modeling systems to inform one another, generating complex, conditional, wisdom-of-the-crowd-style forecasts across fields.
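As a rough illustration of the aggregation and calibration tooling mentioned above, the sketch below pools binary forecasts from several hypothetical systems with a simple linear opinion pool and scores the result with the Brier score. The question set, probabilities, and pooling rule are illustrative assumptions, not a recommended method.

```python
# A minimal sketch of forecast aggregation and evaluation. The questions,
# forecasts, and pooling choice are illustrative assumptions only.
from statistics import mean


def aggregate(probabilities: list[float]) -> float:
    """Linear opinion pool: average the probabilities from each system."""
    return mean(probabilities)


def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probabilistic forecast and the 0/1 outcome."""
    return (forecast - outcome) ** 2


# Hypothetical binary questions, each with forecasts from three AI systems
# and a resolved outcome (1 = happened, 0 = did not).
questions = [
    {"forecasts": [0.70, 0.55, 0.65], "outcome": 1},
    {"forecasts": [0.20, 0.35, 0.25], "outcome": 0},
    {"forecasts": [0.80, 0.90, 0.60], "outcome": 0},
]

pooled = [aggregate(q["forecasts"]) for q in questions]
scores = [brier_score(p, q["outcome"]) for p, q in zip(pooled, questions)]
print(f"Mean Brier score of the pooled forecasts: {mean(scores):.3f}")
```

The same loop structure extends naturally to past-casting: restrict the forecasting systems to data available before each question's resolution date and score them against the known outcomes.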
If you have a proposal that falls within research and forecasting automation but does not align with the specific work outlined here, you are still welcome to apply. However, please note that such proposals are held to a significantly higher bar. We do not accept proposals that fall outside this area.
Our priorities
Proposals should clearly show how the work will promote civilization-defending, human-empowering use of AI, with particular attention to reducing existential risks from advanced AI systems. We prioritize projects that:
- Focus on rapid iteration of prototypes, pragmatism, and a clear path to scaling workable solutions quickly enough to shape real-world AI development within short timelines.
- Democratize access to research and forecasting tools, especially through open-source methods that prevent centralization of power.
- Pair technical advances with safety considerations, addressing dual-use risks and including strategies to prevent misuse.
- Promote epistemic robustness, such as better reasoning, calibration, and transparency in automated systems.
Previously funded work
Examples of past projects we have funded include:
- Forecasting Bot & Research Tools – An open-source assistant to help forecasters reason better and produce more accurate predictions.
- AI Agent Debate for Forecasting – Testing whether deliberation between diverse AI assistants improves forecast accuracy and calibration.
- AI Safety Automation Sprint – Prototyping tools to automate parts of the AI safety research pipeline (e.g. idea generation, coding).
- Autonomous Research Assistants – Building AI assistants to explore intersections of AI safety and neurotech/security domains.
- Replicability Scoring via LLMs – Using large language models to assess the reliability of scientific findings and support trustworthy research.
How to apply?
Complete the application form linked at the top of this page. Applications are accepted year-round and reviewed quarterly. Submission deadlines are:
- March 31
- June 30
- September 30
- December 31
Proposals are first reviewed in-house for fit and quality. Strong submissions are sent to technical advisors for further evaluation. If your proposal advances, we may follow up with written questions or a short call. We aim to make decisions within eight weeks of each deadline.
Who can apply?
We accept applications from individuals, teams, and organizations. Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to explain why they need grant funding.
Funding and terms
- We award $4.5–5.5M in total funding annually. Grants typically range from $10,000 to over $300,000, but we do not set fixed minimums or maximums. Applicants should request the amount they believe is appropriate, supported by a clear budget and scope.
- We fund both short-term and multi-year projects. For longer or higher-budget work, we may disburse funds in tranches, with later payments contingent on progress.
- We can fund overhead costs up to 10% of direct research costs, where these directly support the funded work.
- Grants are subject to basic reporting requirements. Grantees are expected to submit brief progress updates at regular intervals, describing use of funds and progress against agreed milestones.
- Tax obligations vary by country and organization type. Applicants are responsible for understanding and complying with any applicable tax requirements.
For full eligibility criteria, financial details, and documentation requirements, see our Grant Guidelines and Conditions →
Further questions or feedback?
Please contact us at [email protected]
Grantees
- Benjamin Wilson, Metaculus
- Lovkush Agarwal, University of Cambridge