Zhu Xiaohu, Center for Safe AGI | Ontological anti-crisis and AI safety

November 24, 2021

Presenters

Zhu Xiaohu, Center for Safe AGI

Xiaohu (Neil) Zhu is the Founder and Chief Scientist of University AI, an organization providing AI education and training for individuals and large companies in China. He received a master's degree in AI from Nanjing University, with a background in algorithmic game theory, mathematical logic, deep learning, and reinforcement learning. He began investigating AGI/AI safety in 2016 and now focuses on…

Summary

Zhu speaks about the nature of an ontological crisis: a shift in the model of reality for either a human or a machine. He then turns to the ontological anti-crisis and how such a phenomenon can be used to increase the safety of artificial general intelligence.

Presentation: Ontological anti-crisis and AI safety

Transcript

The Center for Safe AGI uses partnerships in China to investigate methodologies for developing AI safely.

An ontological crisis is an (extreme) change in an entity's model of reality.

Humans, machines, and posthumans can undergo ontological crises.

Decision diagram: a machine ontological crisis is likely easier to fix.
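
As a rough illustration (a hypothetical sketch, not code from the talk), such a crisis can be pictured as a value function defined over one set of concepts suddenly facing states from a new model of reality; a bridge mapping is one possible resolution:

```python
# Hypothetical sketch: an agent's values are defined over an old
# ontology; when its model of reality shifts, those values no longer
# apply until new concepts are mapped back onto old ones.

OLD_VALUES = {"particle": 1.0, "wave": 0.5}  # values over old concepts

def reward(state: str) -> float:
    """Reward is only defined for states the old ontology recognizes."""
    if state not in OLD_VALUES:
        raise KeyError(f"ontological crisis: no value for {state!r}")
    return OLD_VALUES[state]

# One possible resolution: a bridge map from new concepts to old ones.
BRIDGE = {"wavefunction": "wave"}

def resolved_reward(state: str) -> float:
    """Evaluate a new-ontology state by translating it first."""
    try:
        return reward(state)
    except KeyError:
        return reward(BRIDGE[state])

print(resolved_reward("wavefunction"))  # 0.5, crisis resolved via BRIDGE
```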

The ontological hierarchy is a product of how ontological progress is built into the human or machine condition.

Ontological completeness and incompleteness theses for safe AGI.

Ontological anti-crisis (OAC) is a systematic way to design an ontological structure that takes change into account. It can help humans and machines understand each other.
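
One way to read "a structure that takes change into account" is a versioned ontology that refuses any update not accompanied by a translation back to earlier versions. The sketch below uses hypothetical names and is only one possible formalization:

```python
# Hypothetical sketch of an "anti-crisis" ontology: every update must
# ship with a translation back to the previous version, so concepts
# (and values defined over them) are never silently orphaned.

class VersionedOntology:
    def __init__(self, concepts: set[str]):
        self.versions = [concepts]                 # version 0
        self.back_maps: list[dict[str, str]] = []  # maps version i+1 -> i

    def update(self, concepts: set[str], back_map: dict[str, str]) -> None:
        """Accept a new ontology only together with a back-translation."""
        if set(back_map) != concepts:
            raise ValueError("every new concept needs a translation")
        self.versions.append(concepts)
        self.back_maps.append(back_map)

    def translate(self, concept: str, to_version: int = 0) -> str:
        """Walk a latest-version concept back to an earlier vocabulary."""
        for back_map in reversed(self.back_maps[to_version:]):
            concept = back_map[concept]
        return concept

onto = VersionedOntology({"particle", "wave"})
onto.update({"wavefunction"}, back_map={"wavefunction": "wave"})
print(onto.translate("wavefunction"))  # "wave": old values still apply
```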

OAC may be able to help with AI safety.

Iterations of the OAC theory may be used to improve completeness and create safer interactions with AI.
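
Read as an iterative process, again purely as a hypothetical sketch, each iteration could measure how much of the current ontology is still untranslatable and patch the gaps, pushing coverage toward completeness:

```python
# Hypothetical sketch: each OAC iteration measures how much of the
# current ontology is still untranslatable and patches the gaps,
# driving coverage toward completeness.

def coverage(current: set[str], bridge: dict[str, str]) -> float:
    """Fraction of current concepts the bridge can translate."""
    return len(current & bridge.keys()) / len(current)

current = {"wavefunction", "field", "observer"}
bridge = {"wavefunction": "wave"}

iteration = 0
while coverage(current, bridge) < 1.0:
    iteration += 1
    concept = (current - bridge.keys()).pop()
    # In practice the patch would come from human input or learning;
    # here a placeholder marks the concept for later review.
    bridge[concept] = "unreviewed"
    print(f"iteration {iteration}: patched {concept!r}")

print(f"coverage: {coverage(current, bridge):.0%}")
```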

