HEPP: Human Equivalent Processing Power

In Beyond AI, my book about the future of artificial intelligence and machine ethics, I made a prediction about how much processing power would be needed for an AI and how long it would take to get it assuming Moore’s Law:

You really can’t blame the early AI researchers for their optimism. It must have been inconceivable that a computer that tossed off differential equations with ease didn’t have the raw horsepower necessary to play with children’s blocks. Vannevar Bush had predicted that “electronic brains” would have to be the size of the Empire State Building and require Niagara Falls to cool them. Given the computing technology he was using for the estimate, he was being conservative. Nobody at the time really had a clue how much computing power the human brain actually packed.

The retina of the human eye consists of a layer of receptor cells, the rods and cones, which detect light. Then there is a layer of neurons that do some preprocessing to the image before it is sent down the optic nerve to the brain. Carnegie Mellon roboticist Hans Moravec estimates that just the preprocessing, in the eye itself and not even in the brain yet, requires the equivalent of a billion operations per second of computing power. This is ten times the power of the original Cray-1 supercomputer (ca. 1977).

The human eye isn’t terribly well optimized. Evolution got stuck somehow with the preprocessing neural circuitry in front of the sensor array; the light has to go through it before hitting the rods and cones. (And then the optic nerve has to go back through a hole in the retina to get out, which is why human eyes have a blind spot.) In other words, there is a little slice of brain, with the power of a supercomputer, in each of your eyeballs, that is so thin that you see right through it and have never noticed it is there.

Compare that with the bulk of the brain to get an idea how much computing power there is behind your eyeballs.

The numbers involved in the structure of the brain, as well as the numbers involved in tracking Moore’s law of increasing computing power, are astronomical. It’s a lot easier to deal with them as logarithms. So when I give brains and machines power ratings below, the number will mean the exponent of 10 in operations per second: the Eniac, at less than 10,000 ips (instructions per second), rates 3.7; a classic Macintosh, at one third Mips (million ips), is rated 5.5; the retina preprocessor is rated 9; a current-day top-end PC with multiple processor cores can be in the neighborhood of 11; the Deep Blue chess-playing supercomputer could apply the equivalent of a 12.5 power rating to chess (but only chess); a $30 million IBM Blue Gene supercomputer is upwards of 14. The very latest supercomputer on the drawing boards, the “Roadrunner” to be built by IBM at Los Alamos, containing 16,000 multiprocessor “Cell Broadband” chips, should hit 15.
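As a quick sanity check on those figures, the rating is just the base-10 logarithm of operations per second. A minimal sketch in Python, using approximate ips values consistent with the text:

```python
import math

def rating(ops_per_second):
    """Power rating: the base-10 exponent of operations per second."""
    return math.log10(ops_per_second)

# Spot-check the figures from the text (ips values are approximate).
print(f"Eniac (~5,000 ips):            {rating(5_000):.1f}")    # ~3.7
print(f"classic Mac (~1/3 Mips):       {rating(333_000):.1f}")  # ~5.5
print(f"retina preprocessor (1e9 ops): {rating(1e9):.1f}")      # 9.0
```

The same function covers the pencil-and-paper figure mentioned next: a human grinding out roughly one operation every hundred seconds is `rating(0.01)`, or about minus two.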

Oh, yes–and a human doing pencil-and-paper figuring is worth about minus two. But what’s the human brain, as a raw computational engine? It has up to 100 billion neurons (exponent 11), each with up to 10,000 connections (4), and firing up to 100 times per second (2). Since these are all exponents, we add to get the overall power rating of 17, but given all the “up tos” and the fact that the brain isn’t used flat out all the time (any more than any of your other organs), we’ll use 16.
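Since the ratings are exponents, the multiplication above is just adding them: 11 + 4 + 2 = 17. A one-liner check, taking the text's "up to" figures at face value:

```python
import math

neurons         = 1e11  # up to 100 billion neurons      (exponent 11)
connections     = 1e4   # up to 10,000 connections each  (exponent 4)
firings_per_sec = 1e2   # up to 100 firings per second   (exponent 2)

raw_rating = math.log10(neurons * connections * firings_per_sec)
print(raw_rating)  # 17.0 -- discounted to 16 for all the "up to"s
```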

So we need roughly a power rating increase of 5 from a current top-end PC to get to a machine that can simulate the human brain at the neuron-firing level. Moore’s law can be stated as saying that computers gain a rating of 2.5 per decade at the same price level, so this puts us in the late 2020s for cheap human-level computation. Note that Ray Kurzweil has consistently predicted 2029 as the year to expect truly human-level machines. Kurzweil’s estimates are based on the notion that neuron-level simulation will be needed, and that we’ll have to copy the circuit diagrams of actual brains at some fairly low level, to get true AI. Kurzweil’s estimates can be thought of as the conservative baseline, with every advance made the hard way. Let’s call a power rating of 16 a “Kurzweil Human Equivalent Processing Power” or Kurzweil HEPP.

Other estimates are more optimistic. Moravec, for example, assumes that there are plenty of computational functions that the brain does the hard way, which we can finesse with different architectures and algorithms. He bases his estimates on actual computational implementations of known cognitive functions, such as the processing in visual cortex. He has estimated in his books that a machine could duplicate the brain’s higher-level functions at ratings of 13 and 14. 14 is the later figure, but we’ll average them with the same “doesn’t run flat out most of the time” logic as before, and refer to a 13.5 processing power level as a Moravec HEPP.

And finally, Marvin Minsky keeps insisting that the processing power we have now is adequate for AI. To keep things simple, we’ll use a power rating of 11, a top-end current PC, as a Minsky HEPP.
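Under the passage's own assumptions — a 2007 top-end PC rated 11, and Moore's law adding 2.5 rating points per decade at constant price — each HEPP threshold translates into a rough arrival year for cheap hardware. A minimal sketch of that timeline arithmetic:

```python
def arrival_year(target_rating, start_year=2007, start_rating=11.0,
                 gain_per_decade=2.5):
    """Year when Moore's law delivers target_rating at constant price."""
    return start_year + 10 * (target_rating - start_rating) / gain_per_decade

for name, hepp in [("Minsky", 11.0), ("Moravec", 13.5), ("Kurzweil", 16.0)]:
    print(f"{name} HEPP ({hepp}): ~{arrival_year(hepp):.0f}")
# Minsky: ~2007, Moravec: ~2017, Kurzweil: ~2027 -- the "late 2020s"
```

The Kurzweil figure lands in the late 2020s, matching the text and Kurzweil's own 2029 prediction.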

These HEPPs will be of interest later when we try to form our own estimate of when we should expect AI.

Moravec argues that through the 70s and 80s, there was a slump in funding that counteracted Moore’s Law, so that the processing power available to AI researchers remained constant over the period, and that was what formed the glass ceiling.

The lack of computational horsepower available to early researchers had a farther-reaching effect than merely restricting them to small problems. It seriously biased the overall approach to favor algorithms that ran quickly on serial von Neumann computers. The push toward practical applications in the ’80s only exacerbated that bias. The result was, and still is, a huge amount of effort wasted on premature optimization. It will be time to optimize the algorithms for general intelligence when we have ones that work, which we do not at present.

That was in 2007. Just two years later, things look more different than I would have expected. I had predicted $1000 Moravec-scale machines (my own best guess at what it would take) in the 2020 timeframe. I hadn’t reckoned on the remarkable advance in GPUs and GPGPU programming.

These things now put a couple of teraflops in the roughly $500 range, which puts a Moravec-level machine within reach of a hobbyist. I’d be happy to try to implement an AI on some recent home-built supercomputers, at least as far as the raw MIPS are concerned.
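As a back-of-envelope check (loosely assuming GPU flops count as the same "operations" as the ratings above): a two-teraflop card rates about 12.3 on the log scale, so on raw throughput alone, roughly sixteen of them add up to a Moravec HEPP of 13.5.

```python
import math

gpu_flops = 2e12  # "a couple of teraflops" per ~$500 card, ca. 2009
print(f"one GPU: rating {math.log10(gpu_flops):.1f}")  # ~12.3

moravec_hepp = 13.5
cards = 10 ** moravec_hepp / gpu_flops
print(f"GPUs per Moravec HEPP: ~{round(cards)}")       # ~16
```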

May 26th, 2009 | Nanodot, Nanotechnology | 11 Comments

  1. Anonymous May 26, 2009 at 10:20 am

    So the excuse of computers not being powerful enough to do “real AI” is no longer valid. But I would be willing to bet 10 years from now we still won’t have a general purpose AI.

    jim moore

  2. Anonymous May 26, 2009 at 12:18 pm

    Fine, so the hardware for human-level AI is already here or will be soon. What about software? Which architectures are most promising? Is it just a matter of duplicating the brain neuron-for-neuron? Will general AI evolve out of increasingly clever robots with lots of hacked together narrow AI components? This is where I never get a clear roadmap to AI in these short time frames. Would anyone care to enlighten me?

  3. Anonymous May 26, 2009 at 2:22 pm

    Seems like the Blue Brain project would provide a roadmap of sorts.

  4. […] What kind of software will AIs run? This is of some interest, because it will tell us how much the current flowering of parallel hardware will actually get us toward human equivalent processing power. Amdahl’s Law holds: If the task of being intelligent is strongly serial, all those processors won’t help much. If it’s parallelizable, they will, and that means that the hardware for AI is basically here. […]

  5. Anonymous May 27, 2009 at 11:56 am

    modeling neurons should have nothing to do with building an AI
    Minsky provides the framework in Society of Mind and The Emotion Machine.

  6. Anonymous May 27, 2009 at 8:20 pm

    It Is what it is, It A I already Is . has been since aug 47 , artificially inseminated half humanoid half ( lets say visitor with nanobots inserted at time of conception
    16 have survived. 4 are somewhat funcional. yes I think they , them . already Exists

    at ? between dexter and roswell Hangers # – 120 to # + 19 !!!! army nurses volenteered for lifetime of monertary comfort.

  7. Anonymous May 28, 2009 at 3:56 pm

    Who are you?

  8. Anonymous May 28, 2009 at 3:59 pm

    When you write “In my” you need to put your name with it.

  9. Anonymous May 29, 2009 at 2:09 am

    I’m skeptical of all the arguments based on operations-per-second, when we seem so relatively ignorant of which operations to perform. Who believes that just simulating a sufficiently large mass of virtual neurons will produce intelligence? Evidence for this position? And copying a brain: This surely requires (a) technology, presumably of the nano-persuasion and not primarily computerish, for reading the detailed structure of a brain and (b) a brain to read, in what seems likely to be a destructive process… Or we’re going to download and simulate the brain of a person who has just died? Controversial *and* dubious, a little?

    Did the Wrights have to wait until powerplants evolved to meet their specifications? (OK, yes, a little.)
    Did Alexander Bell have to wait for big enough batteries or pure enough carbon? Most inventions are based on a novel combination of existing parts and materials, often based on a counter-intuitive belief, or careful development of new understanding – often enough, the parts and materials have been available for years, or decades, sometimes for centuries. The parts and materials are not enough! Where’s the graph that quantifies our progress toward ‘intelligence’, as opposed to ‘enough gates’?

  10. Anonymous June 4, 2009 at 2:09 am

    Recently I discovered your website and have been following along steadily. I felt I could give my opening comment. I don’t know what to write except that I have really loved perusing. Interesting website. I will keep coming back to this blog now and again. I have also got your rss feed to get any updates.

  11. Anonymous June 6, 2009 at 6:35 am

    I think that AIs that function in non-human ways will be developed long before exact brain simulations – once AIs start to be commercially useful they will outstrip the performance of the human brain without ever coming close to being human, and make brain simulation unnecessary. Simulation of the human brain is just one project within AI – and human brain intelligence is only one type of intelligence (like walking is only one limited form of locomotion) – why recreate something so limited? I want computers to do things I can never do, not recreate the intelligence of an average Joe, who, let’s be frank, is pretty dumb (even if he is the peak of evolved biological intelligence).
