Intelligence and the Chinese Room

Michael A. writes:

I support the consensus science on intelligence for the sake of promoting truth, but I also must admit that it especially concerns me that the modern denial of the reality of different intelligence levels will cause ethicists and the public to ignore the risks from human-equivalent artificial intelligence. After all, if all human beings are on the same general level of intelligence, plus or minus a few assorted strengths and weaknesses, then it becomes easy to deny that superintelligence is even theoretically possible.

Is superintelligence theoretically possible?  It depends, I think, on how you define it.  For example, it’s clear that you can be smarter by many measures just by cranking up the clock on your processor.  Human civilization uses a similar trick to get more ops per second: not by increasing clock speeds, but by parallel processing, i.e., making more brains.
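
To make the trick concrete, here is a toy sketch in Python; both numbers are invented, order-of-magnitude placeholders, not measurements:

    # Two routes to more ops/second; all figures are rough assumptions.
    ops_per_brain_per_sec = 1e16   # loose order-of-magnitude guess for a brain
    n_brains = 8e9                 # population of "processors"

    cranked_clock = 10 * ops_per_brain_per_sec        # one brain, 10x clock
    civilization  = n_brains * ops_per_brain_per_sec  # parallel processing

    print(f"{civilization:.0e} aggregate ops/second")  # -> ~8e+25

The aggregate throughput is what grows; a single serial chain of reasoning still runs at one brain’s speed, which is why the two tricks aren’t interchangeable for every measure of “smarter.”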

But mere speed aside, is it possible to be, well, smarter?  Our instinct in ordinary affairs says yes: give two people a test and there are plenty of cases where A will pass it but B never will, no matter how much time he is given.  But under the assumption that an AI can be implemented as an ordinary computer program, there is a sense in which one level of intelligence is equivalent in power to any other possible one: the intelligence necessary to emulate the AI program with paper and pencil.  That is the basis of Searle’s famous Chinese Room thought experiment.
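
That universality claim can be made concrete: running a program is just mechanically following a rule table, which is exactly what the man in the Room does.  A minimal sketch of that loop (the rule table here is a toy stand-in, not any actual AI):

    # The "man in the room" loop: look up (state, symbol) in the rule book,
    # write a symbol, move, repeat.  Any ordinary program can in principle
    # be compiled into such a table, which is what makes pencil-and-paper
    # emulation possible.  This toy table just flips bits until a blank.
    rules = {
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
        ("scan", " "): (" ",  0, "halt"),
    }

    tape, head, state = list("0110 "), 0, "scan"
    while state != "halt":
        symbol, move, state = rules[(state, tape[head])]
        tape[head] = symbol
        head += move

    print("".join(tape))   # -> "1001 "

Nothing in the loop cares whether the steps are executed by silicon or by a patient man with a pencil; only the speed differs.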

But the Chinese Room operates at a disadvantage of probably a quadrillion-to-one slowdown, so it isn’t about to compete with humans any time soon.
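
Where a number like a quadrillion comes from, as a back-of-the-envelope sketch (both figures are rough, order-of-magnitude assumptions):

    # Rough slowdown estimate; both numbers are assumptions, not measurements.
    brain_ops_per_sec = 1e16   # loose estimate of the brain's raw processing
    room_ops_per_sec  = 10     # a very diligent man flipping cards

    slowdown = brain_ops_per_sec / room_ops_per_sec
    print(f"{slowdown:.0e}")   # -> 1e+15: a quadrillion-to-one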

On the other hand, if we abstract away the speed issue, we are faced with a really interesting question (to my mind, at least).  Let’s suppose you’re a Chinese Room implementation of a recursively self-improving superintelligence.  Now, without increasing the clock speed (which is just the guy flipping cards in the room, remember), is there any upper bound on your intelligence?  You’re allowed to have more cards and files, but the guy won’t have any more time to access them.  Still, the simulated AI has a “codic cortex” and can do amazing feats of self-optimization.  Must it stop somewhere?

Let’s take IQ as including speed (what we ordinarily mean by intelligence) and simply divide by operations per second to get a measure of how efficiently the program implements intelligence.  The units work out to be “intelligence-seconds per operation,” which I’ll call ISPO.  So, a tiny bit more formally, we can ask: is there an upper bound on ISPO?
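
A minimal sketch of the bookkeeping, with invented numbers: dividing by ops per second cancels out the clock, so ISPO measures the algorithm rather than the hardware.

    # ISPO = intelligence / (operations per second); all numbers invented.
    def ispo(intelligence, ops_per_sec):
        return intelligence / ops_per_sec

    # The same program on brain-speed hardware and in the Chinese Room
    # (quadrillion-to-one slower, hence quadrillion-to-one less apparent
    # intelligence) scores the same ISPO:
    print(ispo(100.0, 1e16))          # -> 1e-14
    print(ispo(100.0 * 1e-15, 10.0))  # -> 1e-14

    # A genuinely better algorithm wrings more intelligence out of the
    # same ops/second, i.e., a higher ISPO:
    print(ispo(200.0, 1e16))          # -> 2e-14

Improving the algorithm raises ISPO; the question, again, is whether there is any upper bound on it.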

The answer is obviously yes.  If it weren’t, that is, if the speedup to be had by improving the algorithm could grow without limit, then so could the apparent speed of the Chinese Room: the simulated intelligence could run in real time, or faster, which is clearly impossible.  Thus there is an upper bound on ISPO.  This should come as no surprise to computer scientists, since there are provably optimal algorithms for lots of problems (comparison sorting, for instance, provably needs on the order of n log n comparisons), and our instinct is that there is some optimal algorithm, known and provable or not, for any well-defined problem.
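
For a concrete instance of that instinct (a toy problem of my choosing, not one from the original argument): searching a sorted list.  Moving from a linear scan to binary search is a pure algorithmic speedup, more answer per operation at the same clock speed, but a decision-tree argument shows no comparison-based search can beat roughly log2(n) probes, so for this problem the speedup bottoms out.

    # Counting probes: an algorithmic speedup, and its provable floor.
    import math

    def linear_search(xs, target):
        probes = 0
        for i, x in enumerate(xs):
            probes += 1
            if x == target:
                return i, probes
        return -1, probes

    def binary_search(xs, target):          # xs must be sorted
        probes, lo, hi = 0, 0, len(xs) - 1
        while lo <= hi:
            probes += 1
            mid = (lo + hi) // 2
            if xs[mid] == target:
                return mid, probes
            elif xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1, probes

    xs = list(range(1_000_000))
    _, slow = linear_search(xs, 999_999)
    _, fast = binary_search(xs, 999_999)
    floor = math.ceil(math.log2(len(xs)))   # ~20: the comparison lower bound

    print(slow, fast, floor)   # 1000000 vs. 20 probes, right at the floor

The speedup factor is enormous, but fixed; past the floor no further cleverness helps, which is exactly the flavor of bound the ISPO argument predicts for intelligence as a whole.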
