
Recording

Formally Scalable AI Oversight Through Specifications

With Evan Miyazono


Growth in AI capabilities will strain the human oversight and review process. Guaranteed-Safe AI is a recently proposed architecture in which AI systems produce intermediate, verifiable outputs. In this talk, Evan Miyazono will discuss how Atlas Computing is using this architecture to develop narrow AI systems with quantitative safety guarantees.

Before starting Atlas Computing, Evan Miyazono created and led a metascience team within Protocol Labs, as well as a venture studio focused on building better tools for human coordination. He holds a PhD in Applied Physics from Caltech and a BS and MS from Stanford.

I'd love to see a great UI/UX for developing formal specifications. One cool question: could we identify and label clusters in the accessible state space of a program, and expect with high (tunable?) confidence that the clustering could distinguish intended vs. unintended behavior?
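The clustering question is open-ended, but a minimal sketch of the underlying intuition might look like the following, assuming program states can be sampled as numeric feature vectors and that a small labeled sample exists for evaluation. All data, feature choices, and labels here are illustrative assumptions, not material from the talk.

```python
# Toy sketch (not from the talk): cluster sampled program states and check
# whether the clustering separates intended from unintended behavior.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Pretend each program state is an 8-dimensional feature vector; "intended"
# runs fall in one region of state space, "unintended" runs in another.
intended = rng.normal(loc=0.0, scale=1.0, size=(200, 8))
unintended = rng.normal(loc=4.0, scale=1.0, size=(50, 8))
states = np.vstack([intended, unintended])
true_labels = np.array([0] * 200 + [1] * 50)

# Unsupervised clustering of the sampled state space.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(states)

# How well does the clustering recover the intended/unintended split?
# A score near 1.0 suggests the distinction is recoverable from states alone.
print("adjusted Rand index:", adjusted_rand_score(true_labels, clusters))
```

In a real setting the hard parts are choosing state features, sampling the accessible state space, and calibrating the confidence that a cluster boundary tracks the intended/unintended distinction; the sketch only shows how one could score such a claim against a labeled sample.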
