Recently, many people have shortened their AGI timelines, raising safety concerns. This has led some to consider whether neurotechnology, in particular whole brain emulation (WBE) development (or lo-fi approaches to uploading, which may be more cost-effective), could be significantly accelerated, re-ordering differential technology development so that aligned software intelligence arrives in time to lessen the risk of unaligned AGI.
This includes exploring ideas such as:
For further information, consider skimming this workshop on the same topic, or reading some of the following: Whole Brain Emulations, A Hybrid Approach to the human-AI safety problem, Digital People Could Make AI Safer, Brain-like AGI, and BCIs and AI safety.
Explore the potential benefits of cryptography and security technologies in securing AI systems. This includes:
For further information, consider our workshop on the topic, or look into some of the following: AI infosec: first strikes, zero-day markets, hardware supply chains, AI safety and the security mindset: user interface design, red-teams, formal verification, Boundaries-based security and AI safety approaches, Specific cryptographic and auxiliary approaches to consider for AI, Infosec Considerations for AI and the Long-term Future, Defend Against Cyber Threats, and Gaming the Future.
Explore the potential of safe multipolar AI scenarios, such as:
For further information, consider Encultured AI, Open Source Game Theory, Incomplete Contracts and AI Alignment, Paretropian Goal Alignment, and Comprehensive AI Services.
Given the possibility of short AGI timelines, we have received $1M in funding per year to support underexplored approaches specific to this area.
Funding may be distributed to independent researchers, regranted to existing organizations or organizations to be incubated, or used to hire or contract in-house researchers through Foresight Institute (limited research H-1B support is available if relocation to the US is desired). To support underexplored work, we may give priority to non-US applications; however, US applications are also welcome.
This is a rolling application and will remain open for one year or until the budget is depleted. We aim to get back to you with a decision on your proposal within 8 weeks. If you require a faster turnaround, please let us know in the application form and we will aim to respond sooner.
Please fill in the form at the top of this page to apply for funding. If we decide to move forward with your application, we will invite you for a brief interview, during which you are also welcome to ask any questions you may have. Please do not spend more than 3 hours filling in the application.
Projects will be evaluated by a mix of Foresight staff and external advisors. We aim to focus on projects that have a chance of succeeding within short AI timelines. Rather than funding many projects with the potential to make a small difference in the long run, we may be more inclined to fund high-risk, high-reward projects: more speculative, but capable of making a big difference if successful. Generally, we are interested in proposals for scoping/mapping opportunities in this area, especially from a differential technology development perspective.
The tax implications of receiving funds from this grant may vary depending on your jurisdiction. We strongly recommend that you consult a tax professional to understand your specific obligations.
Agoric
Abstraction Lab
ALTER
Foresight Institute
AI Objective Institute
Possibility Research
Cooperative AI
OpenAI
UCLA, Future of Humanity Institute
George Mason University
Carnegie Mellon University
Mila
University of Louisville
Metagov
UNSW Sydney
Investor and advisor
5cubeLabs
Blue Rose Research
GovAI
Vex Capital
Palisade Research
Washington University
Nectome
BrainMind
Caltech
SERI
AE Studio
SaferAI