Tianyi Alex Qiu
Tianyi works on solutions to AI alignment, with a focus on how alignment can uplift human truth-seeking and moral progress – what he believes to be the most important problem of our time. Research he led has received academic distinctions including a Best Paper Award (ACL'25) and a Best Paper Award (NeurIPS'24 Pluralistic Alignment Workshop), with 20k+ downloads of associated open-source projects. He is an Anthropic AI Safety Fellow, based in London. He previously worked with the UC Berkeley Center for Human-Compatible AI and was also a member of the PKU Alignment Team. He mentors part-time for the Supervised Program for Alignment Research and the Algoverse AI Safety Fellowship.