Soenke Ziesche and Kristen Carlson will present their chapters from the recently published book Artificial Superintelligence: Coordination & Strategy (available free via the link), co-edited by Roman Yampolskiy, University of Louisville, and Allison Duettmann, Foresight Institute.
We will leave ample time for socializing so you can engage with the speakers and meet other participants. During the first edition of this salon last week, participants stayed long after the official closing to continue the discussion among themselves. We love our incredible community and hope you join us too!
by Soenke Ziesche and Roman Yampolskiy
In light of rapid progress in the field of AI, there is an urgent demand for AI policies. Bostrom et al. provide “a set of policy desiderata”, of which this article addresses one: the “interests of digital minds”. The focus is on two interests of potentially sentient digital minds: to avoid suffering and to have freedom of choice about their own deletion. Various challenges are considered, including the vast range of potential features of digital minds, the difficulty of assessing the interests and wellbeing of sentient digital minds, and the skepticism that such research may encounter. Prolegomena for abolishing the suffering of sentient digital minds, and for measuring and specifying their wellbeing, are outlined by means of the new field of AI welfare science, which is derived from animal welfare science. The establishment of AI welfare science is a prerequisite for the formulation of AI welfare policies, which would regulate the wellbeing of sentient digital minds. This article aims to contribute to sentiocentrism through inclusion, and thus to policies for antispeciesism, as well as to AI safety, for which the wellbeing of AIs would be a cornerstone.
by Kristen Carlson

Artificial general intelligence (AGI) progression metrics indicate AGI will arrive within decades. No proof exists that AGI will benefit humans rather than harm or eliminate them. A set of logically distinct conceptual components is proposed that are necessary and sufficient to (1) ensure that various AGI scenarios will not harm humanity, and (2) robustly align AGI with human values and goals. By systematically addressing pathways to malevolent AI, we can induce the methods and axioms required to redress them. Distributed ledger technology (DLT, “blockchain”) is integral to this proposal; for example, “smart contracts” are necessary to address the evolution of AI that will be too fast for human monitoring and intervention. The proposed axioms:

1. Access to technology by market license.
2. Transparent ethics embodied in DLT.
3. Morality encrypted via DLT.
4. Behavior control structure with values at the roots.
5. Individual bar-code identification of critical components.
6. Configuration Item (from business continuity/disaster recovery planning).
7. Identity verification secured via DLT.
8. “Smart” automated contracts based on DLT.
9. Decentralized applications: AI software modules encrypted via DLT.
10. Audit trail of component usage stored via DLT.
11. Social ostracism (denial of resources) augmented by DLT petitions.
12. Game theory and mechanism design.
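To give a flavor of what axioms (5) and (10) might look like in practice, here is a minimal Python sketch of a hash-chained, tamper-evident audit trail for uniquely identified AI components. This is an illustration only, not Carlson's implementation: the class, method names, and component IDs are hypothetical, and a real deployment would use an actual distributed ledger rather than an in-memory chain.

```python
import hashlib
import json
import time

# Sketch of axioms 5 and 10: each AI component carries a unique identifier,
# and every use of a component is appended to a hash-chained audit trail
# (a simplified stand-in for a full DLT). All names are hypothetical.

class AuditTrail:
    def __init__(self):
        # A genesis entry anchors the chain.
        self.entries = [{"component_id": None, "action": "genesis",
                         "timestamp": 0.0, "prev_hash": "0" * 64}]

    def _hash(self, entry):
        # Hash the canonical JSON form so any later edit changes the digest.
        return hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def record(self, component_id, action):
        # Chain each new entry to the hash of the previous one.
        self.entries.append({"component_id": component_id, "action": action,
                             "timestamp": time.time(),
                             "prev_hash": self._hash(self.entries[-1])})

    def verify(self):
        # Recompute the chain; altering any entry breaks every link after it.
        return all(e["prev_hash"] == self._hash(p)
                   for p, e in zip(self.entries, self.entries[1:]))

trail = AuditTrail()
trail.record("AI-MOD-00142", "loaded planning module")
trail.record("AI-MOD-00142", "requested external network access")
print(trail.verify())  # True; tampering with any entry would make this False
```

The design point this is meant to convey: because each entry commits to the hash of its predecessor, the trail can be verified faster than a human could review the underlying actions, which is the property smart contracts and DLT audit trails are invoked for in the abstract above.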
Last week, we hosted two other contributors to this book, David Manheim and Nell Watson, for a fantastic discussion.
You can prepare for this salon by watching the video recording below:
Watch round #1, with David Manheim and Nell Watson.
Our work is entirely funded by your donations. Please consider donating (fiat, crypto, or stock) to help us keep going and grow a global community with you. To collaborate more closely, consider becoming a Senior Associate, Patron, or Partner, each with different membership benefits.
Thank you so much!