Artificial General Intelligence: Coordination & Great Powers
The 2018 AGI strategy meeting was based on the observation that progress on AI safety requires progress in several sub-domains, including ethics, technical alignment, cybersecurity, and coordination (Duettmann, 2018). Ethics, technical alignment, and cybersecurity each contain a number of very hard problems. Because solving those problems takes time, a primary concern on the path to AI safety is ensuring coordination among actors that allows for cooperation on those problems, while avoiding race dynamics that lead to corner-cutting on safety. Coordination is itself a very hard problem, but progress on coordination would also benefit work on ethics, technical alignment, and cybersecurity. Since coordination for AGI safety can involve, at least in part, existing entities and actors, and since there is already theoretical literature and there are historical precedents of other coordination problems from which we can learn, coordination is a goal we can work toward effectively today. While the meeting's focus was on AGI coordination, most other anthropogenic risks, e.g., those arising from potential biotechnology weapons, require coordination as well. This report highlights potential avenues for progress.