HEPP: Human Equivalent Processing Power

In Beyond AI, my book about the future of artificial intelligence and machine ethics, I made a prediction about how much processing power would be needed for an AI and how long it would take to get it assuming Moore’s Law:

You really can’t blame the early AI researchers for their optimism. It must have been inconceivable that a computer that tossed off differential equations with ease didn’t have the raw horsepower necessary to play with children’s blocks. Vannevar Bush had predicted that “electronic brains” would have to be the size of the Empire State Building and require Niagara Falls to cool them. Given the computing technology he was using for the estimate, he was being conservative. Nobody at the time really had a clue how much computing power the human brain actually packed.

The retina of the human eye consists of a layer of receptor cells, the rods and cones, which detect light. Then there is a layer of neurons that do some preprocessing to the image, before it is sent down the optic nerve to the brain. Carnegie Mellon roboticist Hans Moravec estimates that just the preprocessing, in the eye itself and not even in the brain yet, requires the equivalent of a billion operations per second of computing power. This is ten times the power of the original Cray-1 supercomputer (ca. 1977).

The human eye isn’t terribly well optimized. Evolution got stuck somehow with the preprocessing neural circuitry in front of the sensor array; the light has to go through it before hitting the rods and cones. (And then the optic nerve has to go back through a hole in the retina to get out, which is why human eyes have a blind spot.) In other words, there is a little slice of brain, with the power of a supercomputer, in each of your eyeballs, that is so thin that you see right through it and have never noticed it is there.

Compare that with the bulk of the brain to get an idea how much computing power there is behind your eyeballs.

The numbers involved in the structure of the brain, as well as the numbers involved in tracking Moore’s Law of increasing computing power, are astronomical. It’s a lot easier to deal with them as logarithms. So when I give brains and machines power ratings below, the number will mean the exponent of 10 in operations per second: the ENIAC, at less than 10,000 ips (instructions per second), rates 3.7; a classic Macintosh, at one third Mips (million ips), is rated 5.5; the retina preprocessor is rated 9; a current-day top-end PC with multiple processor cores can be in the neighborhood of 11; the Deep Blue chess-playing supercomputer could apply the equivalent of a 12.5 power rating to chess (but only chess); a $30 million IBM Blue Gene supercomputer is upwards of 14. The very latest supercomputer on the drawing boards, the “Roadrunner” to be built by IBM at Los Alamos, containing 16,000 multiprocessor “Cell Broadband” chips, should hit 15.
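To make the rating scheme concrete, here is a quick sketch in Python that treats the rating as the base-10 logarithm of operations per second, using the approximate figures above:

```python
import math

def power_rating(ops_per_second):
    """Power rating = the exponent of 10 in operations per second."""
    return math.log10(ops_per_second)

# Approximate figures from the paragraph above
print(round(power_rating(5e3), 1))    # ENIAC, a few thousand ips    -> 3.7
print(round(power_rating(3.3e5), 1))  # classic Macintosh, ~1/3 Mips -> 5.5
print(round(power_rating(1e9), 1))    # retina preprocessing         -> 9.0
print(round(power_rating(1e11), 1))   # 2007 top-end multi-core PC   -> 11.0
```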

Oh, yes–and a human doing pencil-and-paper figuring is worth about minus two. But what’s the human brain, as a raw computational engine? It has up to 100 billion neurons (exponent 11), each with up to 10,000 connections (4), and firing up to 100 times per second (2). Since these are all exponents, we add to get the overall power rating of 17, but given all the “up tos” and the fact that the brain isn’t used flat out all the time (any more than any of your other organs), we’ll use 16.
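Since the ratings are exponents, the estimate is just addition; here is the same arithmetic as a sketch, using the “up to” figures above:

```python
neurons     = 11  # up to 10^11 neurons
connections = 4   # up to 10^4 connections per neuron
firing_rate = 2   # up to 10^2 firings per second

raw_rating = neurons + connections + firing_rate  # 17
derated    = raw_rating - 1                       # 16, discounting the "up tos"
print(raw_rating, derated)                        # 17 16
```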

So we need roughly a power rating increase of 5 from a current top-end PC to get to a machine that can simulate the human brain at the neuron-firing level. Moore’s law can be stated as saying that computers gain a rating of 2.5 per decade at the same price level, so this puts us in the late 2020s for cheap human-level computation. Note that Ray Kurzweil has consistently predicted 2029 as the year to expect truly human-level machines. Kurzweil’s estimates are based on the notion that neuron-level simulation will be needed, and that we’ll have to copy the circuit diagrams of actual brains at some fairly low level, to get true AI. Kurzweil’s estimates can be thought of as the conservative baseline, with every advance made the hard way. Let’s call a power rating of 16 a “Kurzweil Human Equivalent Processing Power” or Kurzweil HEPP.
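As a quick check on that timeline, here is the same projection in code, a sketch taking 2007 as the baseline year and the Moore’s Law rate stated above:

```python
current_pc      = 11    # power rating of a 2007 top-end PC
kurzweil_hepp   = 16    # neuron-level simulation of the whole brain
rate_per_decade = 2.5   # Moore's Law, as stated above

decades = (kurzweil_hepp - current_pc) / rate_per_decade  # 2.0
print(2007 + 10 * decades)                                # 2027.0, i.e. the late 2020s
```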

Other estimates are more optimistic. Moravec, for example, assumes that there are plenty of computational functions that the brain does the hard way, which we can finesse with different architectures and algorithms. He bases his estimates on actual computational implementations of known cognitive functions, such as the processing in visual cortex. In his books he has estimated that a machine could duplicate the brain’s higher-level functions at ratings of 13 and 14; 14 is the later figure, but we’ll average them, with the same “doesn’t run flat out most of the time” logic as before, and refer to a 13.5 processing power level as a Moravec HEPP.

And finally, Marvin Minsky keeps insisting that the processing power we have now is adequate for AI. To keep things simple, we’ll use a power rating of 11, a top-end current PC, as a Minsky HEPP.
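To put the three definitions side by side, here is a small sketch, reusing the 2007 baseline and Moore’s Law rate above, of roughly when hardware at the price of a top-end PC should reach each HEPP:

```python
hepps = {"Kurzweil": 16.0, "Moravec": 13.5, "Minsky": 11.0}
baseline_rating, baseline_year, rate_per_decade = 11.0, 2007, 2.5

for name, rating in hepps.items():
    years = max(0.0, (rating - baseline_rating) / rate_per_decade) * 10
    print(f"{name} HEPP ({rating}): ~{baseline_year + years:.0f}")

# Kurzweil HEPP (16.0): ~2027
# Moravec HEPP (13.5): ~2017
# Minsky HEPP (11.0): ~2007
```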

These HEPPs will be of interest later when we try to form our own estimate of when we should expect AI.

Moravec argues that through the 70s and 80s, there was a slump in funding that counteracted Moore’s Law, so that the processing power available to AI researchers remained constant over the period, and that was what formed the glass ceiling.

The lack of computational horsepower available to early researchers had a farther-reaching effect than merely restricting them to small problems. It seriously biased the overall approach to favor algorithms that ran quickly on serial von Neumann computers. The push toward practical applications in the ’80s only exacerbated that bias. The result was, and still is, a huge amount of effort wasted on premature optimization. It will be time to optimize the algorithms for general intelligence when we have ones that work, which we do not at present.

That was in 2007. Just two years later, things look rather different from what I would have expected. I had predicted $1000 Moravec-scale machines, my own best guess at what it would take, in the 2020 timeframe. I hadn’t reckoned on the remarkable advance in GPUs and GPGPU programming.
[Image: GPU supercomputer]

These cards now deliver a couple of teraflops in the roughly $500 range, which puts a Moravec-level machine within reach of a hobbyist. I’d be happy to try to implement an AI on some of the recent home-built GPU supercomputers, at least as far as the raw MIPS is concerned.
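A rough back-of-the-envelope sketch, treating one floating-point operation as one “op” for rating purposes:

```python
import math

gpu_ops      = 2e12                 # ~2 teraflops per card
gpu_rating   = math.log10(gpu_ops)  # ~12.3
moravec_hepp = 13.5

cards_needed = 10**moravec_hepp / gpu_ops             # ~16 cards
print(round(gpu_rating, 1), math.ceil(cards_needed))  # 12.3 16
```

At roughly $500 a card, a rating of 13.5 works out to something like $8000 worth of GPUs, hobbyist territory, even granting that equating flops with the brain’s “ops” is itself a judgment call.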
