from the smart-allies-not-enemies dept.
From Senior Associate Eliezer Yudkowsky: The Singularity Institute has just announced that it has begun circulating a preliminary version of "Friendly AI" for open commentary by the academic and futurist communities. "Friendly AI" is the first specific proposal for a set of design features and cognitive architectures intended to produce a benevolent – "Friendly" – Artificial Intelligence. The official launch is tentatively scheduled for mid-June, but we hope to discuss the current paper with you at the upcoming Foresight Gathering this weekend. Read More for more details.

With the publication of "Friendly AI", the topic has moved for the first time beyond the realm of pure speculation, into the realm of technical speculation. There is now a specific description of how a Friendly AI would operate, behave, and make choices. In addition to increasing the safety of AI in the long run, we hope that "Friendly AI" will raise the level of discussion in the immediate debate about the long-term impact of very high technologies.

The Friendship architecture described in "Friendly AI" would not be disrupted either by the ability of the AI to modify its own source code (including goal-system source code), or by the AI moving beyond the point of dependence on humans – two possibilities that are often treated as synonymous with disaster in the current literature.