Presenter

Aran Nayebi, CMU Machine Learning Dept
Aran Nayebi is an Assistant Professor in Carnegie Mellon University's Machine Learning Department and a member of the Neuroscience & Robotics Institutes. His lab works at the intersection of neuroscience & AI to reverse-engineer animal intelligence and build the next generation of autonomous agents. Previously, he was a postdoctoral fellow at MIT, and before that, a Ph.D. student at Stanford University with Dan Yamins and Surya Ganguli.
Abstract:
Under what conditions can capable AI systems efficiently align with human preferences, and when is this alignment computationally feasible? Since such generally capable systems do not yet exist, a theoretical analysis is needed to establish when guarantees hold, and what they even are. We provide the first complexity-theoretic analysis of the alignment problem, introducing a game-theoretic framework that generalizes prior alignment approaches under minimal assumptions and yields both upper and lower bounds on alignment's complexity across M objectives and N agents. We show that even very capable, cooperative AI agents, including those enhanced by brain-computer interfaces, face inherent bottlenecks when the task space or number of agents grows large. Nevertheless, we identify key conditions under which efficient alignment remains possible, clarifying what makes an AI agent "sufficiently safe" and valuable to humans.
Full paper: https://arxiv.org/abs/2502.05934