AI Philosophy: Why It’s Hard & State Of The Art
March 6, 2018 @ 7:00 pm - 10:00 pm PST
Allison Duettmann of the Foresight Institute will preview her SXSW talk for us at the Bellevue Club on the evening of March 6, 2018, at 7 pm. Please join us to hear about state-of-the-art AI philosophy from a brilliant young researcher.
Parking will be available, buzz the intercom for access and let the receptionist know you are there for the Futurist Meetup.
The talk will condense the basic lessons of a two-hour workshop Allison will hold at SXSW on March 13 (http://2018.do512.com/events/2018/3/13/ai-safety-why-it-s-hard-updating-state-of-the-art-official) and is divided into three parts:
1. Why AI safety is hard: A breakdown of AI safety problems into four categories (ethical content, technical alignment, computer security, and social coordination), with a discussion of the most important controversies in each.
2. State of the art: I will discuss the most promising strategies for tackling each problem domain, along with their shortcomings.
3. Alternatives: Given the shortcomings of current approaches, I will propose an approach to Artificial General Intelligence that relies on a decentralized view of superintelligences, based on a paper I co-authored with Mark Miller and Christine Peterson for the 2017 UCLA Risk Colloquium (published by Google Research: https://research.google.com/pubs/pub46290.html).
While this session focuses on a positive long-term future, most ideas should also be relevant to current AI, blockchain, and computer security strategies.

Video: https://www.youtube.com/watch?v=Lg1FAtfSheo