Aligned AI

My research at the Future of Humanity Institute centres on the safety and possibilities of Artificial Intelligence (AI): how to define the potential goals of AI, how to map humanity's partially defined values into it, and the long-term potential for intelligent life across the reachable universe. I've been working with people at the FHI and at other organisations, such as DeepMind, to formalise AI safety desiderata in general models, so that AI designers can include these safety methods in their designs.

My past research interests include: comparing existential risks in general, including their probabilities and their interactions; anthropic probability (how the fact that we exist affects our probability estimates around that key fact); decision theories that are stable under self-reflection and anthropic considerations; negotiation theory and how to deal with uncertainty about one's own preferences; computational biochemistry and fast ligand screening; and parabolic geometry. My Oxford DPhil was on the holonomy of projective and conformal Cartan geometries.