Presenter
Stuart Armstrong
Prior to co-founding Aligned AI, Dr. Stuart Armstrong spent a decade at the Future of Humanity Institute at Oxford University conducting deep analysis of the biggest threats confronting humanity, including nuclear threats, pandemics, human extinction, space colonisation, and - above all else - AI. Focusing on the power and risk of AI long before these issues were widely recognised, Stuart deepened our understanding of AI alignment and developed many novel methods of AI control, publishing extensively and accumulating thousands of citations. His theoretical work reached a stage where it became practical - and, he felt, imperative - to implement it. In 2022 he co-founded Aligned AI with the aim of ensuring that AIs remain safe and under human control. Continuing his mathematical work from academia, he co-developed, with Rebecca Gorman, several advanced methods for AI alignment - ACE, EquitAI, ClassifAI - which have become the company's IP, patents, and products. The success of these methods has confirmed his core belief: that the challenge of getting AIs to follow human values not only must be solved, but can be solved, and will be solved. He is also the author of Smarter Than Us, a mentor for the Foresight Institute, and an advisor to the AI Safety Camp, and has appeared in several documentaries ("Alien Worlds", "The Future of Life and Death", "Odyssey") and interviews on AI and the future of space exploration.
Summary:
Current LLMs and other generative models display highly impressive abilities alongside some astounding examples of stupidity and an inability to generalise. This talk will examine why this is, and whether we can expect these models to improve in the near future.