More on the AI takeover

There are at least four stages of intelligence that AI will have to get through to reach the take-over-the-world level. In Beyond AI I referred to them as hypohuman, diahuman, epihuman, and hyperhuman; but just for fun let's use fake species names:
R. insectis (hypohuman): well below the human range.
R. habilis through R. sapiens (diahuman): the human range itself, from its low end to its high end.
R. googolis (epihuman): beyond any individual human; think Google-scale capability.
R. unclesammus (hyperhuman): beyond any human organization; think nation-state-scale capability.

First point: one R. googolis can't take over the world, any more than Google could. You'd have to get to the next stage (R. unclesammus). Any AI in the earlier stages of development that acts antisocially will get stomped on fast (and in the early days, AIs will have no rights, so misbehaving ones will simply be exterminated).

Second point: as Robin Hanson and many other economists point out, the complementary effect of machines up through the R. insectis stage has generally been much stronger than the substitution effect, so that improving technology has raised incomes in general even though it put specific people (buggy-whip makers, for example) out of work. Complementarity is seen when comparative advantage holds, substitution when it doesn't:

So far, machines have displaced relatively few human workers, and when they have done so, they have in most cases greatly raised the incomes of other workers. That is, the complementary effect has outweighed the substitution effect, but this trend need not continue.
In our graph of machines and humans, imagine that the ocean of machine tasks reached a wide plateau. This would happen if, for instance, machines were almost capable enough to take on a vast array of human jobs. For example, it might occur if machines were on the very cusp of human-level cognition. In this situation, a small additional rise in sea level would flood that plateau and push the shoreline so far inland that a huge number of important tasks formerly in the human realm were now achievable with machines. We’d expect such a wide plateau if the cheapest smart machines were whole-brain emulations whose relative abilities on most tasks should be close to those of human beings.
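The complementarity-versus-substitution point above comes down to opportunity costs. A minimal sketch, with made-up productivity numbers (the task names and figures are illustrative assumptions, not from the post): even when the machine is absolutely better at everything, each side specializes wherever its opportunity cost is lower, and both gain. Substitution takes over only when the productivity ratios converge and comparative advantage vanishes.

```python
# Toy model of comparative advantage (all numbers are illustrative assumptions).
# Productivity is in units of output per hour.
productivity = {
    "human":   {"reports": 2.0,  "widgets": 4.0},
    "machine": {"reports": 10.0, "widgets": 5.0},
}

def opportunity_cost(producer, task, other_task):
    """Units of other_task forgone per unit of task produced."""
    p = productivity[producer]
    return p[other_task] / p[task]

# The machine has the absolute advantage in both tasks, but comparative
# advantage goes to whichever side gives up less to do a given task.
for task, other in [("reports", "widgets"), ("widgets", "reports")]:
    h = opportunity_cost("human", task, other)
    m = opportunity_cost("machine", task, other)
    winner = "human" if h < m else "machine"
    print(f"{task}: human cost {h:.2f}, machine cost {m:.2f} -> {winner} specializes")
```

With these numbers the machine specializes in reports and the human in widgets, so the two are complements; if the machine became ten times better at both tasks, the opportunity costs would equalize and the gains from trade would disappear.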

I don't think that the "plateau" is really flat, though, for two reasons. The first is that human capability is itself a range, with R. habilis at one end and R. sapiens at the other. It'll take some time to get through that range: at least a decade, maybe two.

The other reason is that the comparative advantage we saw in the Industrial Revolution may get turned on its head. Right now we have a Moore's Law for the robot's brain but not for its body. In other words, we may enter a strange period in which white-collar workers are replaced by beige boxes while blue-collar workers remain cheaper, for a little while, than a fully capable humanoid robot body. (That gap will disappear soon enough once nanotech manufacturing takes hold, but at the moment it looks like AI may arrive a decade or so earlier than real nanotech.)
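The "brains before bodies" point can be sketched as a cost-crossover calculation. All the numbers below are assumptions made up for illustration: cognition cost is assumed to halve every two years (a Moore's-Law rate), while the humanoid body's cost is assumed to fall only 5% a year until nanotech manufacturing changes that curve.

```python
# Illustrative sketch: how long until automation undercuts a human wage,
# given an assumed starting cost and annual rate of decline?
# Every number here is a made-up assumption, not data from the post.

def years_until_cheaper(cost, annual_decline, threshold):
    """Years until cost, shrinking by annual_decline per year, drops below threshold."""
    years = 0
    while cost >= threshold and years < 200:  # cap to guarantee termination
        cost *= 1 - annual_decline
        years += 1
    return years

WAGE = 40_000  # assumed annual wage for either kind of job, in dollars

# White-collar replacement needs only the beige box: compute cost assumed to
# halve every 2 years, i.e. a per-year factor of 0.5 ** 0.5 (about 0.71).
brain_years = years_until_cheaper(
    cost=1_000_000, annual_decline=1 - 0.5 ** 0.5, threshold=WAGE)

# Blue-collar replacement also needs a capable humanoid body, assumed to get
# only 5% cheaper per year until nanotech manufacturing takes hold.
body_years = years_until_cheaper(
    cost=2_000_000, annual_decline=0.05, threshold=WAGE)

print(f"beige box undercuts the white-collar wage in ~{brain_years} years")
print(f"robot body undercuts the blue-collar wage in ~{body_years} years")
```

Under these assumed rates the brain crosses the wage line decades before the body does, which is the strange interim period the paragraph above describes.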

The key thing to remember when thinking about the economic AI takeover is that it is not something we should be trying to prevent. Why shouldn’t we, the human race as a whole, build machines to do the hard work we need done, and spend our time enjoying the resulting wealth?  Why shouldn’t we spend our efforts deciding what needs to be done, and let the machines do it?

Problems like unemployment are the result of taking a system that is well adapted to one economic situation and applying it to a totally different one. What should the economic system look like when robots do all the work? And once we get that figured out, how do we get there from here?
