Presenter
Lisa Thiergart, SERI MATS
Independent Researcher @ SERI MATS | CS @ Georgia Tech | TUM | CDTM | UCL | previously Research Scientist @ Brainamics | Founder @ PhilosophiaMunich
Summary:
In this talk from the WBE Workshop 2023, Lisa Thiergart discusses various perspectives on alignment in AI research. She highlights the significant recent advances in large language models but raises concerns about the lack of adequate safeguards in the alignment strategy statements published by leading AI labs. Based on technical considerations and alignment concerns, she shortens her timeline for the emergence of dangerous AI from 10 to 5 years.
Lisa also discusses potential threats from narrow forms of AI and from whole brain emulation. While acknowledging that whole brain emulation could be valuable for alignment, she emphasizes the regulatory and moral bottlenecks that could slow alignment progress. She also highlights the confidentiality of technical insights in whole brain emulation as an important consideration.
The speaker shares insights from a workshop she organized with alignment and neurotech specialists, exploring how neurotechnology could contribute to alignment. The workshop highlighted trade-offs between investing in neurotech approaches such as whole brain emulation and non-neurotech approaches, weighing factors such as limited capital, high costs, uncertainty, and potential blockers.
The talk also touches on the challenges of distributing information about advanced technology. The speaker acknowledges the risk that widespread dissemination of such information could lead to the downfall of society, but argues it is still worth exploring better ways to distribute information more cautiously and confidentially, expressing hope for improvement in the future.