My research concerns safety in multi-agent systems and in practice spans game theory, machine learning, and verification. One aspect of this that I spend a lot of time thinking about is how to make AI systems more cooperative, which is the mission of the Cooperative AI Foundation, where I work. At Oxford I am supervised by Michael Wooldridge, Alessandro Abate, and Julian Gutierrez, and am a DPhil Affiliate at the Future of Humanity Institute. I am also affiliated with both the Centre for the Governance of AI and the Foresight Institute. Before coming to Oxford I worked as an intern on Imandra and was a research assistant for Jacques Fleuriot at the University of Edinburgh, where I completed my MSc in artificial intelligence under the supervision of Vaishak Belle. Prior to this I studied for my BSc in mathematics and philosophy at the University of Warwick with Walter Dean.
Lewis Hammond explores delegation games in AI safety, in which humans delegate tasks to AI systems in multi-agent settings. He distinguishes control problems, which concern aligning preferences and capabilities, from cooperation problems, which aim at high joint welfare. He emphasizes cooperation problems, as AI systems will likely interact with one another ever more frequently. He also discusses measuring cooperative capabilities using concepts such as the price of anarchy and equilibrium selection, arguing that understanding and measuring these capabilities will help manage the dynamics among multiple AI systems. Since preventing collusion between machine learning agents is a challenge, he highlights the importance of detection and mechanism design for ensuring ethical behavior and trust in AI systems.
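To make the price of anarchy concrete, here is a minimal illustrative sketch (not taken from Hammond's work): for a two-player Prisoner's Dilemma with an assumed payoff matrix, it enumerates pure Nash equilibria and divides the optimal joint welfare by the welfare of the worst equilibrium.

```python
from itertools import product

# Hypothetical Prisoner's Dilemma payoffs, chosen purely for illustration.
# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2)
# Actions: 0 = cooperate, 1 = defect.
payoffs = {
    (0, 0): (3, 3),
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),
}

def is_pure_nash(profile):
    """A profile is a pure Nash equilibrium if no player can gain
    by unilaterally switching to another action."""
    for player in (0, 1):
        for alt in (0, 1):
            deviated = list(profile)
            deviated[player] = alt
            if payoffs[tuple(deviated)][player] > payoffs[profile][player]:
                return False
    return True

def price_of_anarchy():
    """Optimal joint welfare divided by the joint welfare of the
    worst pure Nash equilibrium (welfare-maximization convention)."""
    welfare = {p: sum(payoffs[p]) for p in product((0, 1), repeat=2)}
    optimal = max(welfare.values())
    equilibria = [p for p in welfare if is_pure_nash(p)]
    worst_equilibrium = min(welfare[p] for p in equilibria)
    return optimal / worst_equilibrium

print(price_of_anarchy())  # → 3.0 (mutual defection is the only equilibrium: 6 / 2)
```

Here mutual cooperation yields joint welfare 6, but the only equilibrium is mutual defection with welfare 2, so the price of anarchy is 3: a simple quantitative gauge of how much welfare self-interested play forfeits relative to cooperation.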