Mind Children

As I pointed out yesterday, the internals of a super-AI are likely to look more or less like some organization of human-level AIs (which in turn are likely to be Societies of Mind of even simpler ones). So just drawing a box around it and calling it “weakly god-like” doesn’t help understand or design it.  What helps is understanding how to make good people and good societies of them.  But this is the problem that parents have always had.  I’ve pointed this out in my book, and recently David Brin points it out on his blog:

… there is nothing new about this endeavor.  That every human generation embarks upon a similar exercise — creating new entities that start out less intelligent and virtually helpless, but gradually transform into beings that are stronger, more capable, and sometimes more brilliant than their parents can imagine.

We acknowledge that individual human beings  — and also, presumably, the expected caste of neo-humans — are inherently flawed in their subjectively biased views of the world.

In other words…  we are all delusional! Even the very best of us.  Even (despite all their protestations to the contrary) all leaders.  And even (especially) those of you out there who believe that you have it all sussed.

This is crucial. Six thousand years of history show this to be the one towering fact of human nature.

See Plato’s “allegory of the cave,” or the sayings of Buddha, or any of a myriad other sage critiques of fallible human subjectivity.  These savants were correct to point at the core problem… only then, each of them claimed that it could be solved by following their exact prescription for Right Thinking. And followers bought in, reciting or following the incantations and flattering themselves that they had a path that freed them of error.

Painfully, at great cost, we have learned that there is no such prescription. Alack, the net sum of “wisdom” that those prophets all offered only wound up fostering even more delusion.  It turns out that nothing — no method or palliative applied by a single human mind, upon itself — will ever accomplish the objective.

…eventually, the Enlightenment offered a completely different way to deal with this perennial dilemma.  We (and presumably our neo-human creations) can be forced to notice, acknowledge, and sometimes even correct our favorite delusions, through one trick that lies at the heart of every Enlightenment innovation — the processes called Reciprocal Accountability (RA).

How can we ever feel safe, in a near future dominated by powerful artificial intelligences that far outstrip our own? What force or power could possibly keep such a being, or beings, accountable?

Um, by now, isn’t it obvious?

The most reassuring thing that could happen would be for us mere legacy/organic humans to peer upward and see a great diversity of mega minds, contending with each other, politely, and under civil rules, but vigorously nonetheless, holding each other to account and ensuring everything is above-board.

But I wasn’t the first to see things this way, nor was Brin. Nearly 20 years ago, Hans Moravec published a book called Mind Children. I can’t think of an earlier or more influential exposition of this way of thinking about the problem.

By the way, Michael Anissimov posted a reply to the Brin piece which begins

I think you’re being somewhat anthropomorphic by assuming that by extending a hand to AIs they’ll necessarily care. A huge space of possible intelligent beings might not have the motivational architecture to give a shit whatsoever even if they are invited to join a polity. The cognitive content underlying that susceptibility evolved over millions of years of evolution in social groups and is not simple or trivial at all. Without intense study and programming, it won’t exist in any AIs.

… which is certainly true. But it isn’t clear that this makes much of a difference. We’re not depending on AIs’ gratitude at being invited to join civilization to make them nice and like us. We’re depending on the fact that the advantages of being part of civilization are so great that they will have no other rational choice.

August 27th, 2009 | Machine Intelligence, Nanodot | 2 Comments

  1. Michael Anissimov August 28, 2009 at 7:07 pm - Reply

    What if my rational choice is to kill humanity with neutron bombs and then do whatever I want with all the matter in the galaxy?

  2. Vire August 29, 2009 at 11:13 am - Reply

    A fair point, Michael, but assuming there are multiple AIs of at least equal capability, the risk of mutual destruction would likely deter any malicious intent from being carried out.

    It’s similar to M.A.D.
