Yet More Thoughts on the Singularity Summit


There were talks by two of SIAI’s researchers, Eliezer Yudkowsky and Anna Salamon, on the general subject of producing a friendly AI as opposed to whatever the alternative is, presumably the Terminator scenario or the like. Eliezer did his usual thing on cognitive biases in humans, and Anna ended the conference with a very nice presentation of utility-based meta-decision theory — how much time should you spend thinking about what to think about? (Disclosure: I am partial to utility-based meta-decision theory, having done a bit of work on it in the 90s in the context of internal computational resource allocation in AI systems.)
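Anna’s question — how much time should you spend thinking? — has a concrete back-of-the-envelope form. The sketch below is not her model; it is a minimal, hypothetical illustration of the standard value-of-computation idea: keep deliberating only while the expected improvement in decision quality exceeds the cost of the thinking time itself. The utility curve and the cost per step are made-up numbers for illustration.

```python
# Minimal sketch of value-of-computation metareasoning (hypothetical
# numbers, not Anna Salamon's actual model). Deliberation improves the
# expected utility of the eventual choice with diminishing returns;
# each unit of thinking time has a fixed opportunity cost. Stop when
# the marginal gain no longer covers the marginal cost.

def expected_quality(t):
    """Expected utility of the decision after t units of deliberation.
    Diminishing returns: approaches 100 asymptotically."""
    return 100 * (1 - 0.5 ** t)

TIME_COST = 4.0  # opportunity cost per unit of deliberation (assumed)

def optimal_deliberation(max_steps=50):
    """Deliberate while the marginal gain exceeds the marginal cost."""
    t = 0
    while t < max_steps:
        marginal_gain = expected_quality(t + 1) - expected_quality(t)
        if marginal_gain <= TIME_COST:
            break
        t += 1
    return t

if __name__ == "__main__":
    t = optimal_deliberation()
    net = expected_quality(t) - TIME_COST * t
    print(f"stop after {t} steps; net utility = {net}")
```

With these numbers the marginal gain halves at each step, so deliberation stops after four steps: the fifth step would buy about 3 units of decision quality at a cost of 4. The same logic applies recursively, of course — one could also ask how long to spend computing the stopping rule, which is what makes it *meta*-decision theory.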

What struck me about both of these talks was the common thread: humans tend to make dumb decisions.

It reminded me of a talk by Ron Arkin at AGI08 about robot ethics. He was discussing using (current-day, rule-based, narrow) AI to make ethical decisions in places like battlefields and military occupation operations. The key was that the actual human soldiers do so poorly that even a crummy error-prone AI could do better. He underlined his talk with a quip that has become my motto for human vs. AI issues of all kinds: “It’s a low bar.”

So I would claim that the SIAI researchers have, perhaps unintentionally, provided one of the best arguments for developing AI as fast as possible and putting it into use in the real world without delay: the humans making these decisions are messing up big time. We don’t need superintelligence to do better, just human-level perception combined with rational decision-making — rational decision-making that, I might add, we already know how to do and understand to be the right approach, but simply don’t bother to apply to most of our decisions. It’s a low bar.

I do have one quibble with Anna’s formulation of the problem in her talk; but before I mention it let me reiterate that I think her conclusion was absolutely correct: we should be spending a lot more effort on AI, and indeed on all the “Singularity” technologies, including nanotech. That said, I think she left out one of the key sources of utility in the back-of-the-envelope calculation: the utility of, and to, the possible future AIs themselves.

We value humans more than animals, animals more than plants, plants more than rocks, because we have an intuition that complexity, sentience, reflection, consciousness, understanding, and all of the other qualities that are correlated with increasing intelligence are in fact the constituents of our values. Now our current AI programs have very little or none of these things and are the cognitive equivalents of insects or at best reptiles. However, the AGIs of the not-too-distant future are going to be at least as intelligent, complex, sentient, and so forth as we are. It would be the height of egocentric selfishness to claim that they weren’t just as deserving of moral concern as we are. Indeed, if we have the moral courage, skill, and luck to build superintelligences, they would have the capacity to be more valuable than we are.

So the calculation must include not only the value (or danger) of the AIs to us, but the value to the AIs themselves. We could perhaps rephrase Kant’s Categorical Imperative (Second Maxim):

Act in such a way that you treat any sentient intelligence, whether in your own species or in any other form, always at the same time as an end and never merely as a means to an end.

October 7th, 2009 | Machine Intelligence, Nanodot | 3 Comments

  1. […] can read his thoughts on the Singularity Summit here, here, and here. […]

  2. Mikkel Kjær jensen October 7, 2009 at 2:39 pm - Reply

    This actually raises an aspect of the nanotechnology/”Singularity” technologies debate that I have found lacking:

    Which groups are out there, taking part in the political process, trying to convey to politicians the issues (both positive and negative) that these technologies might bring, and attempting to get funding from the public? Which kinds of activities are they doing? Is it working? And if not, why not, and what might be done to correct this?

    I am sure that Foresight is doing its part, but I cannot seem to recall any post here that has covered such activities, as opposed to well-founded speculation and reporting on technology advancement.

    The overall impression I am left with is that there is not a lot of outreach, which would seem rather strange.

  3. flashgordon October 7, 2009 at 11:19 pm - Reply

    If Hitler had thought more efficiently about strategy, he’d have conquered the world twice over (with rockets and jet-engine fighter planes at least); on the other hand, if the peoples of the world had thought things through better, Hitler would have been stopped long before he ever entered Poland or destroyed Göttingen.

    Clearly, intelligence is valuable to both evil and good people (depending on what your definitions of good and evil are); so I don’t see how giving higher intelligence to stupid and evil people is going to help the world!

    It just seems to me that intelligence (and evil and goodness, for that matter) are vague concepts; these are phlogistic concepts; until you have a deductive proof of all those concepts, plus “intuition,” “feelings,” and so on, you’re talking like the earth is flat.

    The bottom line for the nanofuture is to live with nanotech and, I suppose, at least souped-up computers (even Kurzweil’s reverse engineering of the brain is not an understanding of how the human mind works) and quantum computers. The bottom line is to keep life, and really intelligence, going. I would think that’s the bottom line; but if you don’t like the idea of human intelligence becoming superintelligent, I guess you could call on the world to stop development! Honestly, the rest of this train of thought is going to make this post so much longer; and really, I’ve already posted solutions to all this at crntalk.
