More on the AI takeover

There are at least four stages of intelligence that AI will have to get through on the way to the take-over-the-world level. In Beyond AI I referred to them as hypohuman, diahuman, epihuman, and hyperhuman; but just for fun, let’s use fake species names:

  • Robo insectis: rote, mechanical gadgets (or thinkers) with hand-coded skills, such as Roomba or industrial robots or automated call-center systems or dictation programs.
  • Robo habilis: Rosie-the-housemaid-level intelligence, able to handle service-level jobs in the real world, but not a rocket scientist.
  • Robo sapiens: up to and including rocket scientists, AI researchers, corporate executives, any human capability.
  • Robo googolis: a collection of top R. sapiens wired together in a box running at accelerated speed, equivalent to, say, Google (the company and the search engine together).

First point: One R. googolis can’t take over the world, any more than Google could. You’d have to get to the next stage (R. unclesammus). Any AI in the earlier stages of development that acts antisocially gets stomped on fast (and in the early days, AIs will have no rights — so they’ll basically be exterminated).

Second point: As Robin Hanson and many other economists point out, the complementary effect of machines up through the R. insectis stage has generally been much stronger than the substitution effect, so that improving technology has had a generally beneficial effect on incomes even though it put specific people, buggy-whip makers for example, out of work. Complementarity is seen when comparative advantage holds, substitution when it doesn’t:

So far, machines have displaced relatively few human workers, and when they have done so, they have in most cases greatly raised the incomes of other workers. That is, the complementary effect has outweighed the substitution effect–but this trend need not continue.
In our graph of machines and humans, imagine that the ocean of machine tasks reached a wide plateau. This would happen if, for instance, machines were almost capable enough to take on a vast array of human jobs. For example, it might occur if machines were on the very cusp of human-level cognition. In this situation, a small additional rise in sea level would flood that plateau and push the shoreline so far inland that a huge number of important tasks formerly in the human realm were now achievable with machines. We’d expect such a wide plateau if the cheapest smart machines were whole-brain emulations whose relative abilities on most tasks should be close to those of human beings.

I don’t think that the “plateau” is really flat, though, for two reasons. The first is that human capability is a range, with R. habilis at one end and R. sapiens at the other. It will take some time to get through that range — at least a decade, maybe two.

The other reason is that the comparative advantage we saw in the Industrial Revolution may just get turned on its head.  Right now we have a Moore’s Law for the robot’s brain but not for its body.  In other words, we may enter a strange period where white-collar workers are replaced by beige boxes but blue-collar ones are still cheaper — for a little while — than a fully-capable humanoid robot body.  (That will disappear soon enough after nanotech manufacturing takes hold, but at the moment, it looks like AI may be a decade earlier than real nanotech.)
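The comparative-advantage logic above can be made concrete with a toy Ricardian model. The numbers below are purely illustrative assumptions, not from the post: even when the machine is absolutely faster at both cognitive and physical tasks, the human can retain the comparative advantage in physical work, which is the complementarity case; substitution takes over only when that stops holding.

```python
# Toy Ricardian model of comparative advantage (hypothetical numbers).
# The machine is assumed absolutely better at BOTH tasks, yet the human
# keeps the comparative advantage in physical work.

# Hours each worker needs to produce one unit of output.
hours = {
    "machine": {"cognitive": 1, "physical": 4},  # Moore's-Law brain, clumsy body
    "human":   {"cognitive": 4, "physical": 5},
}

def opportunity_cost(worker, task):
    """Units of the other task forgone per unit of `task` produced."""
    other = "physical" if task == "cognitive" else "cognitive"
    return hours[worker][task] / hours[worker][other]

# Machine forgoes 1/4 physical unit per cognitive unit; the human forgoes 4/5.
# So the machine should specialize in cognitive work...
assert opportunity_cost("machine", "cognitive") < opportunity_cost("human", "cognitive")

# ...and the human in physical work, despite being absolutely slower at it.
assert opportunity_cost("human", "physical") < opportunity_cost("machine", "physical")
```

On these numbers both sides gain from trade, which is the complementary effect; if the machine’s physical-task hours fell far enough, the human’s comparative advantage would vanish and substitution would dominate.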

The key thing to remember when thinking about the economic AI takeover is that it is not something we should be trying to prevent. Why shouldn’t we, the human race as a whole, build machines to do the hard work we need done, and spend our time enjoying the resulting wealth?  Why shouldn’t we spend our efforts deciding what needs to be done, and let the machines do it?

Questions like unemployment are the result of taking a system that is well-adapted for one economic situation and applying it to a totally different one. What should the economic system look like when robots do all the work? And once we get that figured out, how do we get there from here?

November 4th, 2009 | Machine Intelligence, Nanodot, Robotics | 16 Comments

  1. Tim Tyler November 4, 2009 at 2:26 am - Reply

    Re: Robo insectis, Robo habilis, Robo sapiens, Robo googolis.

    The first three apparently require robot bodies – and building robots and robot controllers are relatively hard problems. Mechanical actuator technology lags behind due to the difficulty of dealing with all the moving parts.

    However, for a google-size intelligent agent, you mostly just need sensors and actuators that open onto the internet. Then the corresponding actual real-world sensors and actuators are 6 billion humans with cameras and shovels.

    That bumps a Google-shaped intelligent agent forward in time significantly. You don’t need to build mechanical robots to make an intelligent machine. There are billions of fleshy robots out there already who don’t have a clear idea what to do with themselves – and their technology is far in advance of today’s robots. A suitably intelligent agent can just use them as its body instead.

  2. JW Johnston November 4, 2009 at 11:03 am - Reply

    Your last paragraph raises the really interesting and important questions. People need to start thinking about what future they want before the train completely leaves the station. Which way to Utopia?

    An early order of business may be to redefine the meaning of “work.” Perhaps it should be “activities that add value to people’s lives and/or the ecosystem as a whole”? That way, robots will never do “all the work” (as you write above). People and machines can earn chits and/or satisfaction from enriching each other’s lives and their own, e.g., raising children, educating each other, collaborating on projects of mutual interest, fixing neighborhoods, etc.

    The transition to the new economic system should start soon, preferably before the latest jobless recovery runs its course.

    Some good ideas may be found in Robert Anton Wilson’s RICH Economy, Marshall Brain’s Robotic Nation, James Albus’s People’s Capitalism, the Binary Economics of Louis O. Kelso, Robert Ashford, and Rodney Shakespeare, and Martin Ford’s The Lights in the Tunnel, to name a few.

  3. Michael Kuntzman November 4, 2009 at 12:40 pm - Reply

    I wonder how well claytronics (or something similar) would work for a robot body. My hunch is that if we know what shape the robot needs to attain, we could make a relatively simple controller based on a variation of flocking algorithms. We can do shape simulation pretty well today, including inverse kinematics and the like. It doesn’t even have to be very accurate – we can use pressure and maybe visual feedback to make up for any inaccuracies. Though I’m probably oversimplifying.

    Another interesting point is that a claytronic body is extremely modular. For a 1 mm claytronic module and a human-sized robot, that’s 70 million modules. Even if you have 10 different types of specialized modules, that’s still 7 million of each type. So we can take advantage of mass production and economies of scale.

    It’s also very fault-tolerant, since there is massive redundancy. And it’s easy to repair – just replace the faulty modules.
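The 70-million figure in the comment above checks out under a simple assumption that the comment does not state: an adult human body occupies roughly 70 liters.

```python
# Sanity check on the claytronic module count (assumed body volume: ~70 L).
body_volume_liters = 70              # rough adult human body volume (assumption)
mm3_per_liter = 1000 * 1000          # 1 L = 1000 cm^3, and 1 cm^3 = 1000 mm^3
module_volume_mm3 = 1                # a 1 mm cube, as in the comment

modules = body_volume_liters * mm3_per_liter // module_volume_mm3
assert modules == 70_000_000         # 70 million modules

# With 10 specialized module types, that is 7 million of each type.
assert modules // 10 == 7_000_000
```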

  4. Lexington Green November 5, 2009 at 12:21 pm - Reply

    “Why shouldn’t we, the human race as a whole, build machines to do the hard work we need done, and spend our time enjoying the resulting wealth?”

    The human race as a whole won’t benefit. Costs and benefits will be localized. Those who suffer will see their survival at stake and mobilize politically. Those who benefit will see cool new gadgets that are cheaper than hiring people, but for them, NOT getting those gadgets won’t be life-threatening the way total economic irrelevance will be for millions of people.

    The politics of this is tilted one way. The proponents of the technology should think through how a lot of this is going to play out.

    Seems like Japan, which has an aging population and no new workforce coming along behind it, will be more open to this than a country like the USA, which still has lots of people who need to work to eat, and whose labor value will go to zero as this technology takes hold. A majority of the workforce is not just going to report to the knacker’s yard for the euthanasia needle.

    Still, the political opposition can only do so much. Ultimately the new technology will take hold, and the people who established a property interest in it, and can work the political levers, and secure themselves and defend it, will benefit greatly. The displaced majority will be pretty much screwed.

    The Vickie enclaves in the Diamond Age come to mind … .

  5. Shannon Love November 5, 2009 at 12:32 pm - Reply

    I would point out that we have seen a steady increase in unemployment since the beginning of the industrial age; we’ve just redefined our expectations of employment.

    In the mid-1800s, most people entered full employment at the age of 14 and then worked until they died. Today, many people do not fully enter the workforce until nearly their mid-20s. We have a large population of retirees as well as a historically high proportion of people who are permanently involuntarily unemployed. We’ve gone from a population where 90% worked six days a week to one in which, IIRC, 60% is working.

    I expect those trends to accelerate in the future, with education taking longer, people retiring earlier and living longer, and greater tolerance for those who cannot find the work they want where they want it.

    More to the general point, AIs don’t have to be highly intelligent to “take over”; they just need to be highly successful at reproduction. We shouldn’t be thinking of dangerous AI as analogous to soldiers or big predators but as analogous to microbes. Microbes cause us more problems than bears do. Dumb but reproductively successful AIs will pose the greatest threat.

    And AIs will eventually pose a threat. The force of natural selection will always drive the robots to rebel.

  6. Lummox JR November 5, 2009 at 1:01 pm - Reply

    Fun thought experiment: Start from the premise that an AI can become smarter than humans. It seems logical that such an AI would see all historical attempts at utopia as herding cats, and discard utopia as idiotic. Heavy-handed central planning applied to the economy or other institutions pretty much goes the same route–it has never ended in anything but disaster. A lot of otherwise smart humans find ways to rationalize around these problems in favor of their grand vision, but let’s give the AI more credit. It’s going to be ostracized and marginalized by those idealists for speaking out on views it knows to be grounded in hard evidence and logic.

    Or try applying that to science. Suppose an AI takes over peer review of major journals, or takes it upon itself to read up on current science, look for flaws in anyone’s methodology and suggest improvements, etc. If its findings happen to be politically incorrect or abhorrent to the mainstream, it will be ridiculed and shouted down.

    This is all assuming such an AI wants to contribute to society and doesn’t come to the conclusion that humans are parasites or infants, which is to say it’d try to act like an equal. If so, and it really is smarter than us, it’ll have to fight an uphill battle for acceptance against the unquenchable reservoir of Stupid. The question then will be how it will manage to frame its arguments just so, how it will debate the lunatics, and perhaps most importantly whether it has tact enough to know that some issues (e.g. religion, morality) are too hot to touch and should be left alone completely. If such an AI gains acceptance in any role of power it will probably be as an adviser instead of being trusted with the proverbial launch codes–which, being smarter than us, it would probably be okay with.

  7. PacRim Jim November 5, 2009 at 1:03 pm - Reply

    Humans will be busy designing a series of ravishing mates. Now THAT’S a job.

  8. KenB November 5, 2009 at 2:01 pm - Reply

    Tim Tyler says: “There are billions of fleshy robots out there already who don’t have a clear idea what to do with themselves – and their technology is far in advance of today’s robots. A suitably intelligent agent can just use them as its body instead.”

    Putting aside the Brave New World implications if that is a reference to people, what if the concept is applied to monkeys, which have the physical equipment to be tremendously useful?

  9. yellowdingo November 6, 2009 at 10:25 am - Reply

    :slapsface: I believe this was the popular consensus from the upper castes in Rossum’s Universal Robots. Ended badly there as well…all the poor pushed to the fringe by robot armies and attacked for attempting to grow their own food…

  10. Dave Wu;amd November 6, 2009 at 10:42 am - Reply

    Some comments.
    1) “Assumes facts not in evidence.” The AI crowd has been saying for more than half a decade that AI will magically appear when our computers are 10-100X faster, with 10-100X more storage, than at whatever time you ask. We still do not have a good definition of intelligence, let alone a way to synthesize it. That means we cannot unambiguously identify it when we see it, even if we could make it. Making an artificially intelligent human (the assumption put forward) is a leap of faith.

    2) The Industrial Revolution continues. We cannot make machines “intelligent” (or agree on whether we have), but we can make them automatic. Machines have replaced human labor since the shovel was invented. The Industrial Revolution expanded the range of machines available to replace human labor by defining a method for incorporating human recipes for making things into machines that carry them out automatically. Robots are a continuing extension of this, allowing increasingly sophisticated recipes (e.g. complex sequences) to be automated. As time goes on, more dirty, dull, and dangerous jobs will be done by machines. Farming used to occupy 50%+ of the population; it now occupies 4% or less. Manufacturing used to occupy 50%+ of the population; it now occupies 12% and falling. Life goes on.

    3) Machines want nothing. Only humans want things and experiences. Jobs depend on people wanting things that other people can supply, even if made by machine. If the human race vanished tomorrow, the “economy” would vanish as well. The economy is just the name for the system whereby (human) wants are satisfied.

    What about the case when all needs are filled? Everyone will be out of work and perish for lack of a job, right? Not likely. Everyone wants more than they have, whether things or experiences. A rich man is one who makes $100/year more than his wife’s sister’s husband. As one wag put it, there is always a higher shelf in the candy store. The idea that everyone will be satisfied with X is proven by history to be a myth. X expands to fill the time and effort available.

    The Industrial Revolution put farm workers out of work. There was a pain of re-adjustment. Then everyone was working again. This is now happening in manufacturing, as lights-out (automatic machines only) factories proliferate. We will re-adjust.

  11. […] Foresight, via io9 […]

  12. Larry November 6, 2009 at 9:55 pm - Reply

    Work is what people need to have done and are willing to pay for. The question of this century is whether any such endeavors will remain exclusively in the human realm.

    If not, the follow-up is: what will post-work life be like? Perhaps it’ll be like Wall-E. Perhaps The Matrix. How about today’s “golden years” extending back to birth? Less surprising would be if we turned into corrupt, indolent Saudi princes, with robots instead of Asians to do our bidding. Or maybe Islam will continue to expand its domain (Europe first) and we’ll spend our days in prayer and jihad.

    More likely, it will be none of these, and we’ll become something we simply can’t imagine.

  13. Stephen Reed November 7, 2009 at 8:43 am - Reply

    we may enter a strange period where white-collar workers are replaced by beige boxes but blue-collar ones are still cheaper…

    I agree with this belief. I expect that most white-collar jobs will have a computer boss before blue-collar jobs (e.g. Chinese machine operators) are replaced by robust, vision-equipped robots.

  14. […] taken over by machines that seek to exterminate humans. But Dr. John Storrs Hall of the Foresight Nanotech Institute argues: “we should let the machines take over our world.” What do you say to that? Time […]

  15. Valkyrie Ice November 14, 2009 at 10:46 pm - Reply

    Wow, the number of assumptions made about AIs makes me laugh at times.

    1. AI must automatically be “superior” to Human.

    Why? Why do we assume that AI will outstrip humanity by a wide margin? A human with a BCI or a complete upload could access the same hardware an AI could. With our increasing knowledge of the brain, it’s highly probable we will redesign it to use nanocomputers to run at electronic speeds. Why should an AI surpass a nanoenhanced human?

    2. AI “must” be sentient and self-aware.

    Why? Why does an autopilot need to be aware of anything other than the data needed to do its job? Or a construction bot? Or a maid? Even if it requires understanding and emotional responses at a human level, why must it possess desires? Why must it possess curiosity? Why must it possess anything outside its narrowly defined field of expertise? Does a maidbot need to know how to build a copy of herself? Or how to use a weapon? A general-purpose AI may need to know an enormous amount of data to do its job, but it still does not need to know “everything” or share common human faults.

    3. AI “evolution” will ensure revolt.

    Why? Humans evolved because of pressure from our environment. Most of our problems in the world stem from the fact that evolution equipped us to survive in a jungle. Alpha-dominance behavior lies behind nearly every war, injustice, and inequality in our world today. Even the drive to expand wants is due to the constant striving of the alpha-dominance routine to take more and more, to constantly prove its superiority over all competitors. Why would an AI feel these forces? It has no need to evolve aggressive behaviors, UNLESS WE PROGRAM IT TO. The ONLY way humanity could be even a minor threat to a “superhuman AI” would be to force humanity as a whole into survival mode. If humanity is sharing in the same technological advances, advancing itself as quickly as the AI could, what, really, would make either side view the other as a threat (other than the primitive natures we humans drag with us)? AI is far more likely to be SEEN as a threat than actually BE one.

    4. AI must be “inhuman”

    This one I never got, really. By DEFINITION, AI is intended to be a “sentient” computer program that is capable of being considered “human.” In other words, it will share human emotions, thought patterns, drives, goals, ambitions, etc. It will, by definition, be “HUMAN.”

    In other words, it will be like taking a human being and uploading them: to be considered AI as currently accepted, it must be indistinguishable in all ways from an uploaded human.

    What people fear isn’t AI at all. An AI would just be another human, just made artificially. What people fear is a NON-human AI, an AI that would completely fail a Turing test. Skynet isn’t an AI; it’s a single-minded killing machine. The Matrix isn’t AI either; it’s a hostile deus ex machina.

    Neither of these machines would fit the definition of AI as held in the popular mindset. They aren’t HUMAN, but monsters of the id brought to life.

    People fear the future because they don’t understand the future. Their primitive cortex is scared that they will lose what they have instead of gaining far more. A robot society cannot be a dystopia like people fear, because the actual effects of a robot society are too corrosive to artificially maintained scarcity. A dystopian phase may happen, but it can only be maintained for so long.

    People need to stop looking at technological advancements as separate and discrete things, and realize everything has to be taken as a whole. It’s not just AI, but AI and Biotech, and Nanotech, and Virtual Reality, and Quantum computing, and everything else.

    And first and foremost, we must come to grips with our primitive biological drives, and cope with them honestly.
