Early retirement — how soon?

In my Early Retirement post, I wrote:

If you have a human-level AI based on computer technology, the cost to do what it can do will begin to decline at Moore’s Law rates. Even if an AI costs a million dollars in, say, 2020, it’ll be a thousand in 2030 and one dollar in 2040 (give or take a decade).
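
For concreteness, here is a minimal sketch of that decline, assuming (as the quote does) that cost falls by a factor of 1,000 per decade, i.e. roughly a halving every year. The starting figures are the quote’s hypotheticals, not data.

```python
def projected_cost(start_cost, start_year, year, factor_per_decade=1000.0):
    """Cost of a fixed amount of computation, declining at Moore's Law rates.

    The 1000x-per-decade rate is the one implied by the quote above.
    """
    decades = (year - start_year) / 10.0
    return start_cost / factor_per_decade ** decades

# The quote's hypothetical: a million-dollar AI in 2020.
for year in (2020, 2030, 2040):
    print(year, f"${projected_cost(1_000_000, 2020, year):,.0f}")
# -> 2020 $1,000,000; 2030 $1,000; 2040 $1
```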

A couple of analyses of this trend have just been moved onto Scientific American’s free robotics site. Hans Moravec writes:

By 2010 we will see mobile robots as big as people but with cognitive abilities similar in many respects to those of a lizard. The machines will be capable of carrying out simple chores, such as vacuuming, dusting, delivering packages and taking out the garbage. By 2040, I believe, we will finally achieve the original goal of robotics and a thematic mainstay of science fiction: a freely moving machine with the intellectual capabilities of a human being.

and Ray Kurzweil writes:

By around 2020 a $1,000 computer will at least match the processing power of the human brain. By 2029 the software for intelligence will have been largely mastered, and the average personal computer will be equivalent to 1,000 brains.

So, are we looking at 2020 or 2040?

In historical terms, it doesn’t really make much difference. To us as individuals, perhaps a bit more. Will “real” AI be constrained by the available processing power, coming slowly into being as Moore’s Law allows? Moravec imagines the course of robotics over the next decades recapitulating the evolution of humans, in similar stages. Or will the discovery of the “secret sauce” of AI burst upon a world where the processing power to run a human is already cheap and plentiful, collapsing the accumulated hardware overhang in one catastrophe-theory-like snap?

Let’s start with a few numbers. Moravec estimates:

From long experience working on robot vision systems, I know that [one pixel’s worth of] edge or motion detection, if performed by efficient software, requires the execution of at least 100 computer instructions. Therefore, to accomplish the retina’s 10 million detections per second would necessitate at least 1,000 MIPS.

The entire human brain is about 75,000 times heavier than the 0.02 gram of processing circuitry in the retina, which implies that it would take, in round numbers, 100 million MIPS (100 trillion instructions per second) to emulate the 1,500-gram human brain. Personal computers in 2008 are just about a match for the 0.1-gram brain of a guppy, but a typical PC would have to be at least 10,000 times more powerful to perform like a human brain.
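
The arithmetic in that passage is easy to check; this sketch simply reproduces it from the quoted figures.

```python
# Reproducing Moravec's back-of-envelope scaling from the quoted figures.
instructions_per_detection = 100        # per-pixel edge/motion detection
detections_per_second = 10_000_000      # the retina's 10 million detections/s
retina_mips = instructions_per_detection * detections_per_second / 1e6
# -> 1,000 MIPS for the retina

brain_grams, retina_grams = 1500, 0.02
mass_ratio = brain_grams / retina_grams  # ~75,000x heavier

brain_mips = retina_mips * mass_ratio
print(f"{retina_mips:,.0f} MIPS retina -> {brain_mips:,.0f} MIPS brain")
# -> ~75 million MIPS, i.e. ~100 million MIPS (100 teraops) in round numbers
```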

There are several things we could quibble with in this estimate — the retina is probably highly optimized for its computational function, it doesn’t learn, and it doesn’t consist primarily of bundles of wiring as the brain does — but I feel the estimate is a reasonable upper bound for the computation needed to do a brain’s worth of work at a functional level. On the other hand, if we’re worried about overhangs, we need to think about lower bounds. Looking at my quibbles, I feel relatively comfortable knocking off a factor of ten and guessing that human-level thought requires somewhere between 10 and 100 teraops.

Today, the cheapest way to get raw processing power is with GPUs like the NVIDIA Tesla. Installed in a system with enough memory, disk, communications, etc., to use them effectively, these come to on the order of $2,500 per teraop. So the hardware cost of a brain-level machine, today, is between $25K and $250K — i.e., in the price range of cars to houses.
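
Putting the two estimates together (10 to 100 teraops of brain-equivalent computation at roughly $2,500 per installed teraop) gives that range; a trivial check:

```python
dollars_per_teraop = 2500   # installed GPU-based system, the 2008 figure above
low, high = 10, 100         # teraops bracketing "human-level" from the estimate
print(f"${low * dollars_per_teraop:,} to ${high * dollars_per_teraop:,}")
# -> $25,000 to $250,000
```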

There is a caveat in comparing these numbers with Moravec’s. He’s interested in mobile robots, and there is the issue of running the machine on batteries and lugging it around with you. Humans face the same problem, of course — our brains are metabolically expensive and physiologically awkward. But in robots we’ll probably be able to finesse the issue with wireless links to stationary computers, and there are plenty of uses for the latter in, e.g., office work.

I feel comfortable going with my lower number ($25K today, projected forward by Moore’s Law) in the long run. The reason is that once we know what the algorithms for AI really are, the hardware will change to match — just as GPUs match the hardware to the needs of graphics processing. On the other hand, I don’t see it going a whole lot lower than that. Our brains are, as mentioned, metabolically expensive, as well as representing significant vulnerabilities in cases like childbirth and falling down stairs. There are significant evolutionary pressures against big brains, and we wouldn’t have them if there weren’t some similarly large benefits to the marginal MIPS.

What this means is that if the hardware for AI isn’t available now at an affordable price, it very likely will be by 2020. Not for a million dollars, but for a few hundred, or at most a few thousand. So Kurzweil’s projected schedule seems more likely to be right as far as the hardware is concerned.
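
As a rough sanity check on that projection: assuming price/performance doubles every 18 months (the doubling period is my assumption here, one common reading of Moore’s Law), the 2008 range falls to roughly a hundred to a thousand dollars by 2020.

```python
# Projecting the 2008 hardware-cost range forward, assuming price/performance
# doubles every 18 months (an assumed rate, not a measured one).
def cost_in(year, cost_2008, doubling_years=1.5):
    return cost_2008 / 2 ** ((year - 2008) / doubling_years)

for cost_2008 in (25_000, 250_000):
    print(f"${cost_2008:,} in 2008 -> ${cost_in(2020, cost_2008):,.0f} in 2020")
# -> roughly $100 and $1,000 respectively
```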

How about the software, then? I think Kurzweil’s estimate of 2029 is probably a good upper bound for that. Much is being learned in fields ranging from cognitive psychology to neuroscience, such that two more decades of progress are likely to bring us to the point of building at least a cheap imitation of the brain.

But I think the software will come earlier, judging from current progress in AI and related fields. The state of the art is such that we could program a machine to do most of the things a human can do, given an appropriately sized development effort. Unfortunately, AI has to some extent been focused on doing just that for some time, as opposed to trying to build a machine that can learn to do the task by itself. The latter is a much harder problem, of course, but one that people are finally beginning to take seriously.

It’s this much harder problem of learning that I believe requires the extravagant computing resources of the human brain. And not just any learning: learning to create and properly use new concepts. Consider a squirrel. Squirrels are very adept — we don’t have a robot close to that level of fluidity in the physical world. They learn pretty well, too, within their existing conceptual framework (e.g. overcoming physical obstacles to get to food). They have brains that are quite a bit smaller than ours — but they don’t develop new concepts.

So expect to see two kinds of AIs in the 2020s: narrow ones, good at particular kinds of things such as the work of maids and chauffeurs (and probably at fairly broad ranges of things — once a skill is learned, copying it is cheap), running on very cheap hardware; and ones that learn and innovate and think outside the box, which will be somewhat more expensive. Economics says the former kind will be the great majority, at least in the early days. But the cheaper computation gets, the more the value of the marginal MIPS will overtake its cost, so that by, say, 2030 we should expect most AIs to be more intelligent than most people — and a lot cheaper.
