
Podcast

Andrew Critch on How AI Learns from Culture (and Why It Matters)

With Andrew Critch


When people think about AGI, most ask “When will it arrive?” or “What kind of AGI will we get?” Andrew Critch, AI safety researcher and mathematician, argues that the most important question is actually “What will we do with it?”

In our conversation, we explore how much our choices matter in making AGI a force for good. Andrew explains what AGI might look like in practical terms and what follows from it being trained on our culture. He also argues that searching for the “best” values for AI is a philosophical trap, and that we should instead focus on reaching a basic agreement about “good” vs. “bad” behaviors. The episode also covers concrete takes on the transition to AGI, including:

  • Why an advanced intelligence would likely find killing humans “mean.”
  • How automated computer security checks could be one of the best uses of powerful AI.
  • Why the best preparation for AGI is simply to build helpful products today.
