Foresight AI Safety grantee Abhinav Singh works on the project "SecureOps", which provides professional GenAI security training tailored for securing enterprise Large Language Model (LLM) services.
This program promotes the safe integration of public GenAI services into enterprise operations. The training uses Capture the Flag (CTF)-style and adversary-simulation exercises, covering topics from LLM security fundamentals to applying custom data to develop AI-based security agents.
Discover all our AI safety grantees here: foresight.org/ai-safety/
Recent Foresight AI grantee Jamie Joyce is running a project on "Mapping Approaches to AI Safety via Autonomous Research Agents and AI Automated Deliberation Mapping," which aims to develop autonomous research agents and enhance automated debate mapping to explore the intersection of AI safety and neurotechnology comprehensively.
Explore all our AI grantees here:
foresight.org/ai-safety/
In this talk from our recent Intelligent Cooperation workshop, Joshua Tan explores the idea of transforming AI into a "shared scientific endeavor" that becomes part of the human experience, rather than something to fear or control. He introduces the concept of Public AI, which refers to publicly funded and governed AI models and applications accessible to the public. Tan argues that Public AI offers greater equity, accessibility, and safety compared to private or open-source AI, and suggests that it could develop as a national agency, a policy scheme, or a decentralized network of publicly funded services.
While Public AI is currently a provocation within various societal, academic, and policy circles, it is gaining traction in countries such as the US, UK, Sweden, UAE, Japan, and India. To further investigate the narratives surrounding Public AI, Tan proposes launching a seminar series and invites those interested in understanding the political power of these narratives to engage.
Watch here:
foresight.org/summary/joshua-tan-ai-as-public-infrastructure-intelligent-cooperation-workshop-2024/
Foresight AI grantee Harriet Farlow's project "Likelihood Analysis in AI Security" addresses a critical gap in AI security research by quantifying the likelihood of AI incidents. While there is substantial research on the severity of AI security risks, the likelihood aspect remains underexplored. This project aims to draw from established risk assessment methodologies in cybersecurity, where risk is often defined as a function of severity and likelihood (risk = severity × likelihood).
The project aims to provide a comprehensive understanding of the probability of exploiting AI vulnerabilities by developing a robust framework for evaluating and mitigating AI security risks. This will enable better-informed decisions and strategies to safeguard AI systems, contributing significantly to AI security.
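The severity-times-likelihood formula the post cites can be illustrated in a few lines of Python. This is a minimal, hypothetical sketch: the scenario names, numeric scales, and scores below are illustrative assumptions, not data from the project.

```python
# Minimal sketch of the cybersecurity-style risk formula cited above:
# risk = severity x likelihood. All scales and scenario values are hypothetical.

def risk_score(severity: float, likelihood: float) -> float:
    """Combine severity (0-10) and likelihood (0-1) into one risk score."""
    return severity * likelihood

# Hypothetical AI-incident scenarios: (name, severity 0-10, likelihood 0-1)
scenarios = [
    ("prompt injection", 6.0, 0.8),
    ("training-data poisoning", 9.0, 0.2),
    ("model inversion", 7.0, 0.1),
]

# Rank scenarios by combined risk, highest first, to guide mitigation priority.
ranked = sorted(scenarios, key=lambda s: risk_score(s[1], s[2]), reverse=True)
for name, sev, lik in ranked:
    print(f"{name}: risk = {risk_score(sev, lik):.1f}")
```

The point of the project's framing is visible even in this toy ranking: a high-severity threat with low likelihood can score below a moderate-severity threat that is easy to exploit, which is why estimating likelihood matters.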
More about Foresight's AI safety grant here:
foresight.org/ai-safety/
Watch Richard Ngo, from OpenAI's Policy Frontiers team, explore different types of AI alignment goals at our recent Intelligent Cooperation workshop.
Ngo categorizes them into single-single, single-multi, multi-single, and multi-multi alignment, each with unique considerations. He focuses on the goal of single-single alignment and the question of which aspects of a human's goals or values the AI should be aligned with, noting a spectrum of options from literal instructions to idealized values. The challenge, for him, lies in balancing obedience and paternalism. He proposes the concept of empowerment as a principled approach to this issue, where the AI empowers users to make long-term choices and execute plans without incoherence or contradictory goals. He concludes that focusing on empowerment as a precise goal for single-single alignment can help balance competing alignment objectives and avoid conflicts between user desires and AI nudges.
See the full talk here:
foresight.org/summary/richard-ngo-what-should-alignment-aim-for-intelligent-cooperation-workshop-...
Thank you to our grantees Joel Pyykkö and Roland Pihlakas, who developed a multi-agent benchmarking framework to explore safe multipolar AI scenarios!
In their Aintelope project, Joel and Roland aimed to enhance AI safety in multipolar scenarios. They utilized multi-agent game simulations and game theory to develop a robust benchmarking framework for multi-agent systems. By experimenting with a biologically inspired reinforcement learning approach, they achieved significant improvements over existing industry standards in seven benchmarks. A detailed publication and a public repository of their benchmarks and methodologies are forthcoming.
More about Foresight's AI safety grant here:
foresight.org/ai-safety/
Watch Anthony Aguirre discuss the future of compute governance at our recent Intelligent Cooperation workshop.
Aguirre argues that controlling the limited supply chain for AI hardware is more feasible than attempting to control easily proliferated software. He proposes a two-part solution to compute governance: first, establishing governance contracts and regulations that nations can agree upon; and second, implementing a verification and enforcement layer using hardware and software cryptographic security measures to ensure governance is enforceable.
See the full talk here: foresight.org/summary/anthony-aguirre-compute-security-governance-intelligent-cooperation-worksho...
Foresight Institute's AI Safety Grant recently funded a workshop that invited leading researchers to explore formalizing the notion of boundaries in complex systems.
This workshop was a major step toward extending the existing work on boundaries and establishing them as a significant new research subfield relevant to AI safety and complex systems theory.
Thank you so much, Chris Lakin, for hosting and organizing this great workshop! Discover the retrospective report of the workshop here: formalizingboundaries.substack.com/p/evans-retrospective-on-mathematical
With the Feynman Prizes, Foresight Institute wishes to recognize recent and brilliant achievements that contribute deeply to the field of Nanotechnology. Nominate a project before July 31st, 2024: foresight.org/foresight-feynman-prizes/
Foresight Institute is a proud partner of The Longevity Prize. The Biomarkers of Aging Challenge has a $200,000 incentive.
Discover more here:
www.longevityprize.com/prize/biomarker
Want access to our incredible community, workshops, career counselling sessions, technical groups, and mentors? Apply to be part of the 2025 Foresight Fellowship before July 31st: foresight.org/foresight-fellowships/
Security is only possible if users can understand the implications of their actions. The Norm Hardy Prize is being offered to encourage work that helps users make wise decisions. The Prize will recognize work that
helps users understand, preferably tacitly, the security aspects of what they do; introduces workflows that make the secure way to do something the easy way; develops design principles for systems that are as easy or easier to use because of their security; or explores 'theory of mind' with respect to how users interact with secure systems.
The long-term goal of the Norm Hardy Prize is a set of design principles and tools that encourage developers to create interaction designs that make it easy for people to use secure systems securely.
Apply before July 31:
foresight.org/norm-hardy-prize/
On Sunday 14th July, at Vision Weekend Europe, Bückeburg Palace, Germany, get ready to dive deep into bottlenecks to technological progress and mechanisms to overcome them at the "Funding X" session with:
DNA, Protocol Labs
Barbara Diehl, SPRIND
Allison Duettmann, Foresight Institute
Molly Mackinlay, Protocol Labs
Check out the other tracks on bio, space, neuro, and AI, and sign up to the waitlist for tickets: foresight.org/vw2024eu/
The Feynman Prizes are renowned for honoring outstanding work early in people's careers. In 2007, Sir J. Fraser Stoddart won the Foresight Institute Feynman Prize in Experiment. Just nine years later, he won the Nobel Prize in Chemistry for the design and synthesis of molecular machines!
Nominations close July 31, 2024: foresight.org/foresight-feynman-prizes/