Yet More Thoughts on the Singularity Summit

There were talks by two of SIAI’s researchers, Eliezer Yudkowsky and Anna Salamon, on the general subject of producing a friendly AI as opposed to whatever the alternative is, presumably the Terminator scenario or the like. Eliezer did his usual thing on cognitive biases in humans, and Anna ended the conference with a very nice presentation of utility-based meta-decision theory: how much time should you spend thinking about what to think about? (Disclosure: I am partial to utility-based meta-decision theory, having done a bit of work on it in the 90s in the context of internal computational resource allocation in AI systems.)
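To make the flavor of that meta-decision question concrete, here is a minimal sketch (my own illustration, not Anna’s actual model): spend another hour deliberating on a decision only while the estimated gain in expected utility from that hour exceeds the cost of the hour itself. The function names, the greedy allocation scheme, and every number below are invented placeholders.

```python
# A minimal sketch of utility-based meta-decision theory (my illustration,
# not Anna's actual model): keep thinking about a decision only while the
# marginal gain from another hour of thought beats the cost of that hour.
# Every number and name below is a made-up placeholder.

def worth_another_hour(expected_gain, cost_of_time):
    """Deliberate further only if the marginal gain exceeds the cost of time."""
    return expected_gain > cost_of_time

def allocate_deliberation(decisions, hours_available, cost_of_time=10.0):
    """Greedily give each hour to whichever decision currently promises the
    largest improvement in expected utility, with diminishing returns, and
    stop when no decision is worth another hour of thought.

    decisions: {name: (initial expected gain per hour, decay per hour)}
    """
    gains = {name: gain for name, (gain, _decay) in decisions.items()}
    decay = {name: d for name, (_gain, d) in decisions.items()}
    schedule = []
    for _ in range(hours_available):
        best = max(gains, key=gains.get)
        if not worth_another_hour(gains[best], cost_of_time):
            break  # nothing left is worth more thinking time
        schedule.append(best)
        gains[best] *= decay[best]  # each extra hour on the same decision helps less
    return schedule

if __name__ == "__main__":
    decisions = {
        "career_move": (100.0, 0.5),     # big stakes, fast diminishing returns
        "research_agenda": (80.0, 0.7),  # big stakes, slower diminishing returns
        "lunch": (2.0, 0.1),             # not worth any deliberation at this cost
    }
    print(allocate_deliberation(decisions, hours_available=8))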

What struck me about both of these talks was the common thread: humans tend to make dumb decisions.

It reminded me of a talk Ron Arkin gave at AGI-08 about robot ethics. He was discussing the use of (current-day, rule-based, narrow) AI to make ethical decisions in settings such as battlefields and military occupations. The key point was that actual human soldiers perform so poorly that even a crummy, error-prone AI could do better. He punctuated his talk with a quip that has become my motto for human-vs.-AI issues of all kinds: “It’s a low bar.”

So I would claim that the SIAI researchers have, perhaps unintentionally, provided one of the best arguments for developing AI as fast as possible and putting it into use in the real world without delay: the humans making these decisions are messing up big time. We don’t need superintelligence to do better, just human-level perception combined with rational decision-making. That is rational decision-making, I might add, that we already know how to do and already understand to be the right approach, but simply don’t bother to apply to most of our decisions. It’s a low bar.
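By “rational decision-making we already know how to do” I mean nothing more exotic than expected-utility maximization over uncertain outcomes. A toy sketch, with invented action names, probabilities, and utilities:

```python
# Expected-utility choice: pick the action whose probability-weighted utility
# is highest. The actions, probabilities, and utilities are invented solely
# to illustrate the mechanics.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """actions: {action name: list of (probability, utility) pairs}."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

if __name__ == "__main__":
    actions = {
        "act_on_gut_feeling":   [(0.6, 10.0), (0.4, -50.0)],  # EU = -14.0
        "check_evidence_first": [(0.9,  8.0), (0.1,  -5.0)],  # EU =   6.7
    }
    print(best_action(actions))  # -> check_evidence_first
```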

I do have one quibble with Anna’s formulation of the problem in her talk; but before I mention it let me reiterate that I think her conclusion was absolutely correct: we should be spending a lot more effort on AI, and indeed on all the “Singularity” technologies, including nanotech. That said, I think she left out one of the key sources of utility in the back-of-the-envelope calculation: the utility of, and to, the possible future AIs themselves.
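To show what that missing term does to the arithmetic, here is a deliberately toy version of such a back-of-the-envelope calculation (again my own illustration, not Anna’s; every probability and utility is a placeholder): the expected value of pressing ahead should sum the utility to humans and the utility to the AIs themselves.

```python
# Toy back-of-the-envelope calculation (not Anna's; all numbers are invented):
# the expected value of developing AI should count the future AIs as moral
# patients, i.e. include the utility accruing to them, not just to us.

def expected_value(p_success, u_to_humans, u_to_ais, p_disaster, u_disaster):
    """Expected utility of pressing ahead, with the AIs counted as ends in themselves."""
    return p_success * (u_to_humans + u_to_ais) + p_disaster * u_disaster

if __name__ == "__main__":
    # Counting only the value (or danger) of the AIs to us:
    humans_only = expected_value(p_success=0.8, u_to_humans=100.0, u_to_ais=0.0,
                                 p_disaster=0.2, u_disaster=-500.0)
    # Also counting the value to the AIs themselves:
    inclusive = expected_value(p_success=0.8, u_to_humans=100.0, u_to_ais=150.0,
                               p_disaster=0.2, u_disaster=-500.0)
    print(humans_only, inclusive)  # -20.0 vs. 100.0: the extra term flips the sign
```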

We value humans more than animals, animals more than plants, plants more than rocks, because we have an intuition that complexity, sentience, reflection, consciousness, understanding, and all of the other qualities that are correlated with increasing intelligence are in fact the constituents of our values. Now our current AI programs have very little or none of these things and are the cognitive equivalents of insects or at best reptiles. However, the AGIs of the not-too-distant future are going to be at least as intelligent, complex, sentient, and so forth as we are. It would be the height of egocentric selfishness to claim that they weren’t just as deserving of moral concern as we are. Indeed, if we have the moral courage, skill, and luck to build superintelligences, they would have the capacity to be more valuable than we are.

So the calculation must include not only the value (or danger) of the AIs to us, but also the value to the AIs themselves. We could perhaps rephrase the second formulation of Kant’s Categorical Imperative:

Act in such a way that you treat any sentient intelligence, whether in your own species or in any other form, always at the same time as an end and never merely as a means to an end.
