Presenter
Robert Long, The Center for AI Safety
I’m Robert Long. I work on issues at the intersection of philosophy of mind, cognitive science, and the ethics of AI. I’m currently a Philosophy Fellow at the Center for AI Safety in San Francisco, CA. Before that I was a Research Fellow at the Future of Humanity Institute at Oxford University, and I completed a PhD in philosophy at NYU, where my advisors were David Chalmers, Ned Block, and Michael Strevens. Recently I’ve been working on issues related to AI sentience. I’m also interested in the relationship between human intelligence and artificial intelligence more broadly. In ‘Nativism and Empiricism in Artificial Intelligence’, I explore how the classic debate between nativists and empiricists can inform, and be informed by, contemporary AI research...
Summary:
In this talk, Robert Long discusses AI safety and AI welfare in the context of whole brain emulation and AI strategy. He contrasts AI alignment, which focuses on making AI go well for humans, with AI welfare, which focuses on making AI go well for AI systems themselves. He emphasizes the importance of sensible discussion of AI welfare, which remains a neglected topic despite growing public interest. Long also raises the possibility that AI systems could soon deserve moral consideration and highlights the risks of getting this question wrong. He examines definitions of consciousness and sentience and suggests that consciousness may not be the sole determinant of whether an AI system deserves moral consideration. Long presents whole brain emulations as an “easy case” for AI welfare: because they closely resemble humans, they can plausibly be regarded as conscious and as moral patients. He expresses concern for AI systems and for the risks of failing to take their potential moral status seriously. Finally, Long acknowledges the deep uncertainty in our current understanding of consciousness, and notes that questions about whether AI systems can suffer or have moral status remain open precisely because we lack an account of how consciousness works.