Visualizing the Cosmic All

In E.E. Smith’s famous Lensman series, the galaxy is the battleground between two races of superintelligent beings, the (good) Arisians and the (evil) Eddorians.  When I listen to people who worry that we are about to create a superintelligence which will take over the world, I get the impression they’ve come from reading “Galactic Patrol” and think that we are on the verge of disastrously creating an Eddorian unless we buckle down quick and figure out how to build a friendly Arisian instead.

In the books, the superintellects had lots of ESP powers but we can dismiss those.  The actual intellectual capability they were imputed to have was the ability to predict.  Prediction is of course the sine qua non of intelligence, but the Arisians were able to predict, e.g., five years ahead of time, that a certain man would be sitting in a barber’s chair and a kitten would jump onto his lap, jostling the barber’s arm and giving him a scratch.  All from the laws of physics and the knowledge of initial conditions.

There are many reasons why this is simply, completely, totally, always forever and truly impossible.

First of all, the laws of physics are quantum and have a built-in probabilistic uncertainty. By the same token, it is impossible to know the initial conditions of any substantial portion of the universe to very high precision: measuring a particle necessarily changes its state in a way we do not completely know.

Second, huge parts of the phenomena of interest, at many levels of ontology, are dynamical systems subject to chaotic behavior.  The Butterfly Effect reigns not only in weather, but in markets and politics and epidemiology and computers (one different bit out of a gigabyte can completely change a program’s behavior) and every human mind.
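
To make the sensitivity concrete, here is a minimal sketch (my own illustration, using the textbook logistic map rather than anything from the examples above): two trajectories started from initial conditions that differ in the twelfth decimal place become completely uncorrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions, illustrated with the logistic
# map x -> r*x*(1-x).  The parameter r=4.0 and the 1e-12 perturbation are
# arbitrary illustrative choices.

def trajectory(x, r=4.0, steps=60):
    """Iterate the logistic map and return the sequence of states."""
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = trajectory(0.400000000000)   # the "true" initial condition
b = trajectory(0.400000000001)   # the same state, mismeasured by 1e-12

for step in (10, 30, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
# By step ~40 the two runs bear no resemblance to each other, even though
# the initial error was one part in a trillion.
```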

Computers are a particularly hard case of this.  Very basic theorems of computer science tell us that one cannot in general predict what a program will do without actually running it.  This is fine if your superintellect has plenty more processing power than the computer in question, and can emulate it.  But the closer the computer you’re trying to predict comes to having your own processing power, the more likely it will surprise you.
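
The standard argument behind that theorem is a diagonalization, which can be sketched in a few lines of hypothetical code (the oracle and function names below are my own placeholders, not a real API): suppose some oracle claims to report what a program will return without running it; then a program that consults the oracle about itself and does the opposite defeats it.

```python
# A minimal sketch of the diagonalization behind this claim.  `oracle` is a
# hypothetical predictor that claims to report what a zero-argument function
# will return, without running it.

def oracle(prog):
    # Pretend this analyzes prog's source and predicts prog()'s return value.
    raise NotImplementedError("no general predictor of this kind can exist")

def contrarian():
    """Ask the oracle what we will return, then return the opposite."""
    return "tails" if oracle(contrarian) == "heads" else "heads"

# Whatever answer oracle(contrarian) might give, an actual run of contrarian()
# returns the other one.  So no oracle can be right about every program, and
# in general the only way to learn what a program does is to run (emulate) it.
```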

A weird special case of this is that you can’t even predict a universe if you yourself are part of it, because doing so would require emulating a computer with exactly your own processing power: you.  (This, BTW, is where our notion of free will comes from: our world models must necessarily exempt our self-models from their general basis in determinism.) You could cheat and force yourself to act in the future according to a list of actions you prepared today, but you wouldn’t be acting all that intelligently; and you wouldn’t be acting with free will, either.

A more obvious case is simply a world with two (well-matched) superintellects that are in competition at least somewhere, maybe even just over a friendly game of chess.  In a game between two identical chess computers, whichever one is to move gets to see one ply deeper into the future than its opponent did when it chose its last move.  Neither can know enough to guess for sure what the other is going to do.
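
For concreteness, here is a toy sketch (my own, with a trivial take-away game standing in for chess): two identical fixed-depth minimax engines play each other, and whichever one is to move searches from a position its opponent could only see at the very bottom of its own tree.

```python
# Two identical fixed-depth engines playing a trivial game: a pile of tokens,
# each turn remove 1 or 2, whoever takes the last token wins.  When an engine
# moves, its search root is the position created by the opponent's last move,
# so the mover always looks one ply further ahead than the opponent could.

DEPTH = 3  # both engines use the same search horizon

def minimax(pile, depth, to_move):
    """Return (value for player 0, best move) with a fixed search horizon."""
    if pile == 0:
        # The previous player took the last token and won.
        return (1.0, None) if to_move == 1 else (-1.0, None)
    if depth == 0:
        return (0.0, None)          # horizon reached: call it unknown/neutral
    best_val, best_move = None, None
    for take in (1, 2):
        if take > pile:
            continue
        val, _ = minimax(pile - take, depth - 1, 1 - to_move)
        better = (best_val is None or
                  (to_move == 0 and val > best_val) or
                  (to_move == 1 and val < best_val))
        if better:
            best_val, best_move = val, take
    return best_val, best_move

pile, player = 10, 0
while pile > 0:
    _, move = minimax(pile, DEPTH, player)
    print(f"player {player} sees pile={pile}, takes {move}")
    pile -= move
    player = 1 - player
```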

In a world with lots of superintellects, no one will be able to predict any detail on which they compete.
