Early retirement — how soon?

In my Early Retirement post, I wrote:

If you have a human-level AI based on computer technology, the cost to do what it can do will begin to decline at Moore’s Law rates. Even if an AI costs a million dollars in, say, 2020, it’ll be a thousand in 2030 and one dollar in 2040 (give or take a decade).

A couple of analyses of this trend have just been moved onto Scientific American’s free robotics site. Hans Moravec writes:

By 2010 we will see mobile robots as big as people but with cognitive abilities similar in many respects to those of a lizard. The machines will be capable of carrying out simple chores, such as vacuuming, dusting, delivering packages and taking out the garbage. By 2040, I believe, we will finally achieve the original goal of robotics and a thematic mainstay of science fiction: a freely moving machine with the intellectual capabilities of a human being.

and Ray Kurzweil writes:

By around 2020 a $1,000 computer will at least match the processing power of the human brain. By 2029 the software for intelligence will have been largely mastered, and the average personal computer will be equivalent to 1,000 brains.

So, are we looking at 2020 or 2040?

In historical terms, it doesn’t really make much difference. To us as individuals, perhaps a bit more. Will “real” AI be constrained by the processing power available, and slowly come into being as allowed by Moore’s Law? Moravec imagines the course of robotics over the next decades recapitulating the evolution of humans, in similar stages. Or will the discovery of the “secret sauce” of AI burst upon a world where the processing power to run a human is cheap and plentiful, collapsing a catastrophe theory-like overhang?

Let’s start with a few numbers. Moravec estimates:

From long experience working on robot vision systems, I know that [one pixel’s worth of] edge or motion detection, if performed by efficient software, requires the execution of at least 100 computer instructions. Therefore, to accomplish the retina’s 10 million detections per second would necessitate at least 1,000 MIPS.

The entire human brain is about 75,000 times heavier than the 0.02 gram of processing circuitry in the retina, which implies that it would take, in round numbers, 100 million MIPS (100 trillion instructions per second) to emulate the 1,500-gram human brain. Personal computers in 2008 are just about a match for the 0.1-gram brain of a guppy, but a typical PC would have to be at least 10,000 times more powerful to perform like a human brain.

There are several things we could quibble with in this estimate — the retina is probably highly optimized for its computational function, it doesn’t learn, and it doesn’t consist primarily of bundles of wiring as the brain does — but I feel the estimate is a reasonable upper bound for the computation needed to do a brain’s worth of work at a functional level. On the other hand, if we’re worried about overhangs, we need to think about lower bounds. Looking at my quibbles, I feel relatively comfortable knocking off a factor of ten and guessing that human-level thought requires somewhere between 10 and 100 teraops.
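Moravec’s back-of-envelope estimate, and the factor-of-ten discount above, amount to a few lines of arithmetic (an illustrative sketch using only the numbers quoted in the text):

```python
# Back-of-envelope version of Moravec's estimate, using the numbers
# quoted above (an illustrative sketch, not a precise model).
detections_per_sec = 10_000_000    # retinal edge/motion detections per second
instructions_per_detection = 100   # per detection, with efficient software
retina_mips = detections_per_sec * instructions_per_detection / 1_000_000
# -> 1,000 MIPS for the retina

brain_to_retina_ratio = 75_000     # 1,500 g brain vs. 0.02 g retinal circuitry
brain_mips = retina_mips * brain_to_retina_ratio
# -> 75 million MIPS, which Moravec rounds to 100 million MIPS = 100 teraops

upper_teraops = 100
lower_teraops = upper_teraops / 10  # the factor-of-ten discount argued above
print(brain_mips, lower_teraops, upper_teraops)
```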

Today, the cheapest way to get raw processing power is with GPUs like the NVIDIA TESLA. Installed in a system with enough memory, disk, communications, etc., to use them effectively, these come to on the order of $2,500 per teraop. So the hardware cost of a brain-level machine, today, is between $25K and $250K — i.e., in the range of a car to a house.
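The cost range follows directly from that price point and the teraops range above (again, just a sketch of the arithmetic):

```python
# Hardware cost of a brain-level machine at the GPU price quoted above
# ($2,500 per teraop; an illustrative sketch).
dollars_per_teraop = 2_500
teraops_low, teraops_high = 10, 100   # the estimated range for human-level thought

cost_low = dollars_per_teraop * teraops_low    # $25,000
cost_high = dollars_per_teraop * teraops_high  # $250,000
print(f"${cost_low:,} to ${cost_high:,}")      # -> $25,000 to $250,000
```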

There is a caveat when comparing these numbers with Moravec’s. He’s interested in mobile robots, and there is the issue of running the machine on batteries and lugging it around with you. Humans face the same problem, of course — our brains are metabolically expensive and physiologically awkward. But in robots, we’ll probably be able to finesse the issue with wireless links to stationary computers, and there are plenty of uses for the latter in, e.g., office work.

I feel comfortable in going with my lower number ($25K today projected by Moore’s Law) in the long run. The reason is that once we know what the algorithms for AI really are, the hardware will change to match — just as the GPUs match the hardware to the needs of graphics processing. On the other hand, I don’t see it going a whole lot lower than that. Our brains are, as mentioned, metabolically expensive, as well as representing significant vulnerabilities in cases like childbirth and falling down stairs. There are significant evolutionary pressures against big brains, and we wouldn’t have them if there weren’t some similarly large benefits to the marginal MIPS.

What this means is that if the hardware for AI isn’t available now at an affordable price, it very likely will be by 2020. Not for a million dollars, but for a few hundred, or at most a few thousand. So Kurzweil’s projected schedule seems more likely to be right as far as the hardware is concerned.

How about the software, then? I think Kurzweil’s estimate of 2029 is probably a good upper bound for that. Much is being learned in fields ranging from cognitive psychology to neuroscience, such that two more decades of progress should bring us to the point of building at least a cheap imitation of the brain.

But I think that the software will come earlier, just judging from the current progress in AI and related fields. I think the state of the art is such that we could program a machine to do most of the things that a human can do, given an appropriately-sized development effort. Unfortunately, AI has to some extent been focused on doing just this for some time, as opposed to trying to build a machine that can learn to do the task by itself. That’s a much harder problem, of course, but one that people are finally beginning to take seriously.

It’s this much harder problem of learning that I believe requires the extravagant computing resources of the human brain. And not just any learning: learning to create and properly use new concepts. Consider a squirrel. Squirrels are very adept — we don’t have a robot close to that level of fluidity in the physical world. They learn pretty well, too, within their existing conceptual framework (e.g. overcoming physical obstacles to get to food). They have brains that are quite a bit smaller than ours — but they don’t develop new concepts.

So expect to see two kinds of AIs in the 2020s: narrow ones, good at particular kinds of things like maid or chauffeur work (and probably fairly broad ranges of things — once a skill is learned, copying it is cheap), running on very cheap hardware; and ones that learn and innovate and think outside the box, which will be somewhat more expensive. Economics says that the former kind will be the great majority, at least in the early days. But the cheaper computation gets, the more the value of the marginal MIPS will overtake its cost, so that by, say, 2030 we should expect most AIs to be more intelligent than most people — and a lot cheaper.

March 26th, 2009 | Economics, Machine Intelligence, Nanodot, Robotics | 6 Comments

  1. Anonymous March 26, 2009 at 2:49 am - Reply

    How much will an AI be worth, and what will it cost?

  2. Anonymous March 26, 2009 at 2:49 am - Reply


  3. Anonymous March 26, 2009 at 10:39 pm - Reply

    I really think that there is too much emphasis on the emulation and not enough on results. We could augment AI with crowdsourcing. A narrow AI encounters a problem and determines whether it can solve it; if yes, no problem; if no, it appeals to a crowd, or Mechanical Turk, or the like.

    There’s also the possibility that BMI/BCI moves faster than AI and we augment our computers/robots with actual synthetic brains.

  4. Anonymous March 26, 2009 at 11:47 pm - Reply

    Tech aside, I live in a 100-year-old building. Just because we can do something doesn’t mean we will… at least not right away. There are sunk costs to ponder. There’s recoupment time, and testing, and cultural acceptance that need to be considered. Outsourcing has slowed because it’s politically unpopular; that being said, we certainly have the ability to outsource far, far more jobs than we have.

    If AI hits in 2020, will it be prevalent by 2040? Maybe. If it hits in 2035, I doubt it’ll be big by 2040. There are also political questions to consider. I was never clear as to whether or not this was true, but remember the rumors that you couldn’t bring a G4 Mac to China because it was technically a supercomputer? Will other restrictions be placed on AI programs? Probably. Will they be fought tooth and nail by trade unions and workers’ advocates? Probably. Will religious and socially conservative groups try to ban them from all walks of life? I’d wager the chances are better than average.

    Trying to determine when we’ll be able to achieve human level intelligence is complex enough, adding to it when we will have the political will (and economic stability) is another matter entirely. I’m openly skeptical about James Albus’s economic ideas (he’s not an economist) and a little worried about aspects of the picture Robin Hanson paints…

    I think a molecular assembler will be a prerequisite for the singularity, not a by-product. Maybe I’m projecting my love of science fiction here, but when I talk about molecular assemblers, I mean the machine that can make anything — and I’m including food in this. If food and shelter become so cheap that employment is only a means of acquiring luxuries, then sure, a massive rollout of robots and an intelligence explosion will be fine and won’t create vast amounts of social upheaval. But if it’s the other way around, given how Americans loathe free riders and anything that could be spun to have the faintest whiff of communism to it, don’t hold your breath on AI having an easy path to market. It would fundamentally change the political and economic landscape too much, too quickly, for the tastes of the average power brokers.

    As a group we spend too much time congratulating ourselves for having accomplished steps 1-2 and wondering when step 10 will arrive. We should take a harder look at steps 3-9, not all of which are technical in nature.


  5. JamesG March 30, 2009 at 2:56 am - Reply

    Communism and free riders are not liked in America because someone else has to pay for them. With AI and nanorobotics, nobody will have to make up the slack as people retire early. Big difference. And I don’t think politics or people’s prejudice against advanced technologies is going to make any difference; it didn’t with the TV, or the internet, or social websites, and AI and nanorobotics will be far more desirable.

    Anyway, I don’t think it is going to take that long. Supercomputers with AI should be here within 2 or 3 years; people are going to be blown away by them, and science will become a majorly funded field then, imo.

  6. Anonymous March 31, 2009 at 9:42 am - Reply

    I’m more interested in A.I. applications to daily life. For example, does the U.S. Constitution prohibit an A.I. from running for Congress or the Senate? Dreaming of an A.I.-run government with the corruption software turned off, programmed to follow only the Constitution. Maybe it would be simpler to have A.I.s be judges on the Supreme Court. I’m assuming an A.I. wouldn’t be able to ignore reality and claim the Constitution is a living and breathing document.
