The Trajectory of Civilization: Extinction, Race to the Bottom, or Upward Climbing


This was a debate hosted by Foresight Institute between
Robin D. Hanson (George Mason University, Future of Humanity Institute),
Paul Christiano (OpenAI),
Christine Peterson (Foresight Institute),
Peter Eckersley (Electronic Frontier Foundation),
Mark S. Miller (Foresight Institute, Agoric),
and Alyssa Vance (Apprente),
moderated by Allison Duettmann (Foresight Institute),

on the topic:

The Future of Civilization: Extinction, Race to the Bottom, or Upward Climbing?

Some talking points:

evaluating the long-term future: value drift vs. entrenchment of social values, and moral predictions

the default evolution of civilization: existential risks and a race to the bottom after the current Dreamtime vs. a generally positive trajectory (Enlightenment Now)

whether rationally planned gardens, à la Raikoth (Scott Alexander), or a decentralized Pareto-topia (Eric Drexler) comes closer to Utopia

comparing decentralized, open, multipolar systems vs. centralized, unipolar singletons as governance frameworks for AI development and beyond

the speed and character of AI development given recent developments, e.g. AlphaGo Zero and AlphaZero (as an update on the AI Foom debate)

whole brain emulations (WBEs), their value for AI safety, and the risks they pose due to economic pressures (Age of Em, The Future of Human Evolution)

general problems and strategies in incentivizing actors given hidden social incentives (Inadequate Equilibria, The Elephant in the Brain)

---

Some fun pieces that this discussion is based on:

Paul Christiano writes about the probability of civilizational survival and the relative influence, distribution, and entrenchment of human values in My Outlook, and about values in The Golden Rule.

Robin Hanson wrote a piece on Dreamtime and races to the bottom, a piece on Civilization, a piece on Value Drift, and the book Age of Em. Then there is the AI Foom debate.

Mark S. Miller, Christine Peterson, and Allison Duettmann wrote a piece on Decentralized Approaches to Reducing Existential Risks, arguing that civilization is a superintelligence that tends to grow along tropisms that serve human interests by enabling voluntary interactions that are Pareto-preferred by the individuals involved.

Eliezer Yudkowsky wrote the book Inadequate Equilibria on the mechanisms by which civilizations get stuck, especially via incentive misalignment for decision-makers, asymmetric information, and suboptimal Nash equilibria. He also wrote a post on corporations vs. superintelligences.

Scott Alexander wrote Meditations on Moloch, positioning the notion of multipolar traps against praise of polycentrism and multipolarity.

Nick Bostrom wrote a piece on The Future of Human Evolution, and there is Anna Salamon and Carl Shulman's talk on WBEs for safe AGI.

All readings can be found at https://www.existentialhope.com/.
Get involved with Foresight Institute at https://foresight.org/.