After a presentation on “A List of some Good Reasons to be Skeptical that there is Any Possibility of AI Alignment” by Forrest Landry, we will open up for group discussion.
Whether you are working on AI or AI alignment, or are skeptical of the possibility of AGI or simply of the possibility of aligning generally intelligent agents, please join us for this tricky but hopefully fun discussion.
Opening talk description:
A brief summary of social network bias, multi-polar traps, and the 'principal-agent problem', as equivalence classes, all of which can be considered in terms of the game theory of the inherent physics of carbon vs silicon environments and ecology. From here, we can make some remarks on complexity containment and market processes, which provide a basis by which we can then consider the economic factors that result, and in turn, the implications that these will necessarily have with respect to any coherent theory of ethics. This leads to a restatement of the problem definition in terms of degrees of altruism, and hence, to a well-justified basis for comprehensive skepticism with respect to the entire class of problems exemplified by the notion of AI Alignment.
Our weekly online salons:
We meet weekly on Thursdays at 11 AM PT / 8 PM CEST to explore cutting-edge topics & under-covered science.
We leave ample time for discussion and socializing so you can meet the brilliant speakers and fellow participants in breakout rooms.
Feeling like leading an inspiring salon or taking us into a deep dive with your presentation? Nothing would excite us more: you can apply to be a speaker using this form.
Add the salons to your calendar: https://bit.ly/foresightcalendar