10 Weeks of Daily Global Salons: Thank You & What’s Next
Dear community,
With the start of shelter-in-place orders in March, we began hosting daily online salons to explore pathways toward beneficial long-term futures in these unprecedented times. The rest is now Foresight history:
Seeing how many of you kept joining regularly and engaging with us, we extended the series week after week, for a total of 10 weeks of daily meetings.
We couldn’t have wished for a better way to spend this time than making sense of this new reality with our community.
The resulting 60+ videos will soon be uploaded to our website and are already available on our YouTube channel, in this playlist.
This daily series is coming to an end; our hope, however, is that it becomes a new beginning for something even bigger.
Stay tuned. You will hear from us soon.
In the meantime, here is one way to keep engaging with our vibrant online community:
David Manheim and Nell Watson will present their chapters from the recently published book Artificial Superintelligence: Coordination & Strategy (free via the link).
We’ll open with a brief introduction to the book by its co-editors, Roman Yampolskiy and Allison Duettmann, followed by David Manheim and Nell Watson presenting their contributions, summarized below. We’ll also set aside time for socializing so you can meet the speakers and other participants.
An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents additional failure modes for interactions within multi-agent systems that are closely related. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how extant literature on multi-agent AI fails to address these failure modes, and identifies work which may be useful for the mitigation of these failure modes.
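To make one of these failure modes concrete, here is a minimal toy sketch in Python of Goodhart’s law and specification gaming: an optimizer that climbs a measurable proxy ends up degrading the true goal once the two diverge. This is our own illustration, not a method from the chapter; the functions, names, and coefficients are all hypothetical.

```python
import random

# Toy illustration of Goodhart's law / specification gaming (hypothetical model).
# The optimizer only ever sees the proxy metric; "gaming" inflates the proxy
# cheaply while actually reducing the true goal.

def true_goal(effort: float, gaming: float) -> float:
    """Real value produced: genuine effort helps, gaming hurts."""
    return effort - gaming

def proxy_metric(effort: float, gaming: float) -> float:
    """What gets measured and rewarded: gaming inflates it cheaply."""
    return effort + 4.0 * gaming

def hill_climb(steps: int = 2000) -> tuple[float, float]:
    """Greedy hill-climbing that accepts a change only if the PROXY improves."""
    effort, gaming = 0.0, 0.0
    for _ in range(steps):
        new_effort = max(0.0, effort + random.uniform(-0.1, 0.1))
        new_gaming = max(0.0, gaming + random.uniform(-0.1, 0.1))
        if proxy_metric(new_effort, new_gaming) > proxy_metric(effort, gaming):
            effort, gaming = new_effort, new_gaming
    return effort, gaming

if __name__ == "__main__":
    random.seed(0)
    effort, gaming = hill_climb()
    print(f"proxy score: {proxy_metric(effort, gaming):.2f}")  # climbs steadily
    print(f"true goal:   {true_goal(effort, gaming):.2f}")     # drifts negative as gaming outpaces effort
```

Because the proxy rewards gaming four times as strongly as effort, the hill-climber preferentially accepts gaming moves, so the proxy score keeps rising while the true goal falls. The paper’s multi-agent failure modes compound this single-agent dynamic across interacting optimizers.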
This article looks at the problem of moral singularity in the development of artificial intelligence. We are now on the verge of major breakthroughs in machine technology where autonomous robots that can make their own decisions will become an integral part of our way of life. This article presents a qualitative, comparative approach, which considers the differences between humans and machines, especially in relation to morality, and is grounded in historical and contemporary examples. This argument suggests that it is difficult to apply models of human morality and evolution to machines and that the creation of super-intelligent robots that will be able to make moral decisions could have potentially serious consequences. A runaway moral singularity could result in machines seeking to confront human moral transgressions in a quest to eliminate all forms of evil. This might also culminate in an all-out war in which humanity could be defeated.
Our work is entirely funded by your donations. Please consider donating (fiat, crypto, or stock) to help us keep going and grow a global community with you. To collaborate more closely, consider becoming a Senior Associate, Patron, or Partner, each with its own membership benefits.
Thank you so much!