Norvig starts with the history of the textbook.  In 1990, AI textbooks were subpar.  The field was changing in three ways: moving from logic to probability, from hand-coded knowledge to machine learning, and from systems that imitate human reasoning toward normative systems that seek the best answer regardless of how a human would find it.  After leaving Berkeley for Sun, he helped write a new textbook about AI.


In software engineering, the main enemy is complexity; in AI, the main enemy is uncertainty.  Reasoning under uncertainty and interacting with the environment were the key themes of the new AI textbook.  The newest edition also covers deep learning extensively.  The difficult part of AI is not the algorithms; it's deciding what you want to optimize.  Ethics, fairness, privacy, diversity, equity, and lethal autonomous weapons are taking a more prominent place in the discussion about AI.  As the field has changed, the students have changed too: AI is now a requirement rather than an elective, and the newest edition streamlines the content to be more accessible.


AI: A Modern Approach


Key Points from the Q&A Session


  • A practical, rather than philosophical, understanding of intelligence helps us solve problems
  • AI could help assess college applications to achieve an ideal diversity of viewpoints across an entire incoming class.  Alternative viewpoints can spawn efficient solutions to complex problems.
  • From Google’s perspective, filter bubbles are not as big a problem as people think.  Facebook and other companies have a harder time dealing with it due to the nature of social media.
  • Robots taking over the world is not a critical problem; the unintended effects of AI are, such as cheap surveillance for totalitarian governments.
  • The definition of AI may be too broad, but what really matters is how the communities working on AI interact with each other
  • Decentralized politics are not a big concern for Google.  They focus more on data privacy.
  • We are already living in a world with nonhuman entities – corporations and governments.  We can't understand them completely, but we can make some predictions about what they will do.  The same is going to be true for AI.  The danger lies in the rate of change: AI will be capable of much faster change.
  • Computer science has become more of a natural science.  It’s too complex now to simply focus on proofs and mathematics.
  • Building trustworthy systems is important.
  • Criticisms of the "maximize utility" approach tend to ignore externalities.  Taking a broader view of utilitarian systems tends to resolve the paradoxes that spring up.
  • AI is a complement, rather than a substitute, for human labor in the economy.  It's a tool that helps people get their jobs done.
  • People in AI have often rediscovered things that were already known; we should do a better job at background research.