Human Level AI

Accelerating Future » World Future Society 20 Forecasts for 2010-2025.

Michael A is mildly skeptical about the World Future Society's claim that we'll have “human-level AI” by 2025.

This caused me to think about whether I believed it myself. I think the answer depends on how you define it. I think AI is going to be really big over the 2020s — maybe like the internet in the 90s. But what will this mean for the real world, as opposed to inside the AI labs?

Is an AI that is the equivalent of a 2-year-old “human equivalent”? If I had an AI right now that was the equivalent of a 2-year-old, I'd have one in 2025 that was the equivalent of a 16-year-old. That would be “human equivalent” in most people's book.

I’m fairly certain that we’ll have AI that’s capable of a wide range of human tasks by 2025 — housemaids, butlers, chauffeurs, police and security guards, lots of desk and sales jobs, etc.  What remains to be seen is whether it will be equivalent to the 2-year-old in that essential aspect that it will learn, grow, and gain in wisdom as it ages.

Many humans, on the other hand, don’t learn or grow all that much once they get to adulthood.  A world of robots that were programmed to be competent at their jobs, but not to learn much, wouldn’t be enormously different from our current one.

So it depends on your definition. If you're OK with calling a robot human equivalent if it can, say, do everything a janitor is supposed to, that's likely by 2025; if it has to be able to create art and literature and do science and wheel and deal in the political and economic world and be a productive entrepreneur, you may have to wait a little bit longer.

By J. Storrs Hall | September 16th, 2009 | Machine Intelligence, Nanodot | 23 Comments

  1. Tim Tyler September 16, 2009 at 3:09 pm - Reply

    “Human-level” intelligence is a poor-quality concept, since human intellects vary considerably, and also because it may not be as useful to measure machine mental capabilities on a one-dimensional scale as it is to measure human intellects that way.

    This point seems to be poorly appreciated by those who think something special will happen when machines match the human intellect.

  2. […] ON WHEN WE’LL SEE Human-Level AI. “So it depends on your definition. If you’re OK with calling a robot human equivalent if it […]

  3. Samantha Atkins September 18, 2009 at 2:43 pm - Reply

    If by human level you mean no more raw power than a human mind, in terms of what sorts of concepts and complexity of problem the AI can handle, that still leaves a possible many-orders-of-magnitude speed difference arising from differences in substrate switching speed, assuming equivalent levels of parallelism. If you are saying that the overall capacity, including speed, is human, even genius-level human, and no more, then I begin to wonder whether the exercise has been worthwhile. If you can clone knowledgeable AIs of this kind with their knowledge intact, or have them gain knowledge fairly instantly (by regular human standards at least) from other such AIs, then the exercise is still worthwhile. But if they can't do any more than a human in unit time, can't be cloned easily, and don't learn/absorb knowledge any faster, then the enterprise would be a great YAWN. Yes, they can work 24 x 7, supposedly for nothing more than power and maintenance, but that would be all the gain left. More than nothing, but not a singularity.

    Actually I think that if we have human equivalent AI (and not limited to one area either) then we will skip right over that to greater than human AI very quickly. Generally I think that shooting for something “like a human”, especially emulating the human brain, is a pointless limitation in our thinking about intelligence and how to implement it. I expect greater than human intelligence by 2025.

  4. rhhardin September 18, 2009 at 6:29 pm - Reply

    Artificial intelligence is the field with the most continuous promise since 1950.

    Each successive generation of young males doesn’t see why it can’t work.

  5. SMSgt Mac September 18, 2009 at 6:33 pm - Reply

    Define human intelligence first. Then we’ll talk about if and when ‘comparable’ artificial intelligence will be achieved.
    I’m with Penrose in his ‘Emperor’s New Mind’ on this one.

  6. hitnrun September 18, 2009 at 7:14 pm - Reply

    “Many humans, on the other hand, don’t learn or grow all that much once they get to adulthood. A world of robots that were programmed to be competent at their jobs, but not to learn much, wouldn’t be enormously different from our current one.”

    That’s a wild oversimplification for editorial purposes. Humans – or nearly all of them – learn and grow every day. If you hire a 60-year-old woman to be your desk clerk, she will be much better at the job and wiser to its subtleties on day 30, day 100, year 1, and year 5 than she was at each previous waypoint.

    It’s also a bit misleading to say that human intellect “varies.” While technically true – there are statistically significant numbers of mentally disabled people – there’s not a whole lot of difference between most humans when we’re talking about the basic capacity for reason. Psychological factors like prejudice (theirs and yours), ignorance, and limitation (internal or external) are far more responsible for the “stupid” humans you observe than raw gray horsepower. That is to say: a human janitor is vastly the intellectual superior of the first robot that will be able to eclipse his job performance, no matter how little you might think of the TV he watches when he goes home.

  7. A.C. September 18, 2009 at 7:19 pm - Reply

    Thirty-five years ago, Marvin Minsky, a professor of computer science and AI researcher at MIT, predicted that we would have human-level AI in “four to four hundred years.” The above pretty much confirms his prediction.

  8. Toads September 18, 2009 at 8:02 pm - Reply

    Suppose that AI can reach the equivalent of a human IQ of 70 by Year T.

    Then it would reach 100 by Year T+2, perhaps.

    And it would reach 140 by Year T+5, perhaps.

    So half of all humans would be surpassed by Year T+2, and almost all humans by Year T+5.

    So the Year T, whether it is 2025 or 2035 or 2050, is far less important than the fact that once a human of IQ 70 can be duplicated, it would not take long to overtake all humans.

  9. Tom Maguire September 18, 2009 at 8:36 pm - Reply

    if it has to be able to create art and literature and do science and wheel and deal in the political and economic world and be a productive entrepreneur, you may have to wait a little bit longer.

    And if you want it capable of commenting at other blogs, well, get back to us Monday…

  10. Toads September 18, 2009 at 9:50 pm - Reply

    If a janitor can be duplicated by 2025…

    That means a compelling sexbot can also be duplicated by 2025 (a hot stripper is not smarter than a janitor). The Japanese are surprisingly far along the path towards this.

  11. David Govett September 18, 2009 at 11:08 pm - Reply

    Bah, humbug! You linear extrapolators seem to be unaware that runaway AI will be upon us within one generation. Even now, there are projects to model the human brain, and unless you believe there is more to cognition than electrochemistry, there seems to be no insuperable obstacle. Once freed from the confines of the human skull, the brain will become more highly parallel, faster, and, soon thereafter, self-modifying. After that, we’ll be toast.

  12. Joel September 19, 2009 at 12:42 am - Reply

    ” if it has to be able to create art and literature and do science and wheel and deal in the political and economic world and be a productive entrepreneur, you may have to wait a little bit longer.”

    I don’t know about politics, economics, and science, but as far as art and literature go, computers have been creating those for ten years already (as have chimpanzees). Perhaps it will take some time for them to measure up on the pretentiousness.

  13. Slocum September 19, 2009 at 4:18 am - Reply

    “I’m fairly certain that we’ll have AI that’s capable of a wide range of human tasks by 2025 — housemaids, butlers, chauffeurs, police and security guards, lots of desk and sales jobs, etc.”

    And I’m fairly certain that we won’t even be close in 15 years. Not only do all those positions require human language and vision capabilities (which are both far, far beyond the abilities of current AI systems), they also require a great deal of human judgement (police?!? With guns?!?).

    No, the pattern has been, and will likely continue to be, that we will have artificial systems that can do a limited (but useful) subset of tasks that humans can do — but only under controlled conditions and in a ‘brittle’ way that shows little of human flexibility and robustness. So OCR is incredibly useful — but the scans have to be clean, straight, and high-resolution. Humans can easily read mildly degraded text that OCR systems fail at. But that’s OK — OCR is still extremely useful. It’s a similar situation with natural language translation — artificial systems don’t do it the way humans do, and they make absurd errors that no human translator would make, but they are very useful in providing a rough first cut.

  14. Craig Zimmerman September 19, 2009 at 5:30 am - Reply

    “…if it has to be able to create art and literature…” What would be remarkable wouldn’t be the ability to create art and literature; it would be the desire, the overwhelming internal need to create, which would signify a breakthrough.

  15. […] Instapundit, the Foresight Institute discusses a World Future Society forecast predicting human level AI by […]

  16. Valerie September 19, 2009 at 8:02 am - Reply

    “…everything a janitor can do”? I question that one, even if it is limited to cleaning, which covers a hell of a lot more than vacuuming and mopping, much less maintenance.

  17. Sean Ryan September 19, 2009 at 8:04 am - Reply

    An interesting question is what this does to the market for unskilled and semi-skilled labor. Manufacturing jobs of this kind have largely moved offshore and will ostensibly continue to do so. Unskilled workers have managed to retain some economic bargaining power in jobs that cannot be moved offshore, however. The aforementioned janitor’s duties can’t be done from China, nor can DMV clerks do their jobs from India. Indeed, a huge proportion of government jobs seem designed to fund middle-class lifestyles for those without the skills to earn them.

    If AI advances within the next 15 years to the point that a large and increasing proportion of unskilled (and typically unionized) jobs can be done by machines, then the implications are as dire for unskilled workers (and labor unions) as they are positive for society as a whole.

    Andy Stern’s SEIU has been the fastest-growing union in the nation for years, due primarily to organizing just this class of worker. One wonders: will the purple-shirted thugs at recent health care protests cease to exist, or will they be replaced by robots, too – Andy Stern’s own private army of Obamaist Cylons?

  18. TheRadicalModerate September 19, 2009 at 8:46 am - Reply

    Does “human-level” mean “acts like a human” or does it mean “processes the same amount of data as a human but doesn’t necessarily act human”? The answer to this question depends on which of the two AI camps you fall into.

    On one side, you have the knowledge-based expert system / inference engine / symbolic processing folks, who think that the key to getting a machine to think and act like a human is to model the structure of knowledge by adorning symbols with relationships and properties, then throwing compute cycles at that massive data structure until you can process enough of it to start making human-like responses. This approach has produced some very useful systems that are in wide use today, but I’m skeptical about it ever producing anything that acts like a human being.

    On the other side, you have the neural networking folks, who care very little about the structure of knowledge and view it as an emergent property of larger and larger networks of self-organizing pattern recognition systems. The neural network people have the advantage that, to a certain extent, they don’t have to worry about the structure of knowledge. They merely have to mimic something like the structure of the brain and they’re likely to get interesting results.

    The other advantage of the neural approach is that it’s very easy to model when you have the same level of computing power as the brain. When you can simulate a hundred billion neurons, all connected together via about a hundred trillion to a quadrillion synapses, you’re there, to some degree.

    We’re just not that far off from being able to do that simulation. (In fact, I have a spreadsheet that says that you could probably build such a system today if you were willing to throw hundreds of millions of dollars at the problem and maintain a network of about 100,000 fiber-optic cables.) But the trick to such systems is that we don’t know quite enough about all the various ways that neurons process synaptic information and, once we know that, we don’t know enough about how the brain accomplishes various functions through different local patterns of connections and, maybe even more important, how those local, limited-function regions connect together to produce the flexible system that is the human brain.

    So it’s quite possible that, long before we can produce something that acts like a human, we’ll be able to produce systems that do incredibly useful work, but which behave more like insane humans, or even something that’s completely alien but pretty smart. Given that proviso, I think that 2025 is an entirely reasonable date.
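The neuron and synapse counts quoted in the comment above can be turned into a rough scale estimate. Here is a minimal back-of-envelope sketch in Python; the per-synapse cost parameters (one 32-bit weight per synapse, a 1 kHz timestep, one multiply-accumulate per synaptic update) are purely illustrative assumptions, not figures from the comment:

```python
# Back-of-envelope scale estimate for a brute-force, brain-scale neural
# simulation, using the neuron/synapse counts quoted in the comment above.
# Per-synapse cost parameters are illustrative assumptions only.

NEURONS = 1e11               # ~a hundred billion neurons (for reference)
SYNAPSES = 1e15              # upper end: ~a quadrillion synapses
BYTES_PER_SYNAPSE = 4        # assume one 32-bit weight of state per synapse
TIMESTEPS_PER_SEC = 1000     # assume a 1 kHz simulation timestep
FLOPS_PER_SYNAPSE_STEP = 2   # assume one multiply-accumulate per synapse per step

memory_bytes = SYNAPSES * BYTES_PER_SYNAPSE
flops = SYNAPSES * TIMESTEPS_PER_SEC * FLOPS_PER_SYNAPSE_STEP

print(f"synaptic state: {memory_bytes / 1e15:.0f} PB")    # 4 PB
print(f"sustained compute: {flops / 1e18:.0f} EFLOP/s")   # 2 EFLOP/s
```

Even under these generous simplifications, the synaptic state alone is petabytes and the sustained throughput is exaflop-class, which is consistent with the comment's point that raw scale, not just knowledge of local connectivity, is part of what gates such a simulation.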

  19. Tristan Yates September 19, 2009 at 10:51 pm - Reply

    When I was 13 I was interested in AI. I read all of the books and within about six months understood why advanced machine intelligence was impossible. Bottom line is there’s no functional theory of mind. I love how in movies AIs are both master problem solvers and relentless automatons, as if there were no conflict between the two modes of operation. People don’t do what they are told, sometimes for very good reasons, and sometimes for very bad reasons. Why would we expect AIs to be any more capable and reliable than individual humans? Will the AI lock its muscles and dream like we do? Show me the spec and source code for a human brain, all one hundred billion neurons, and maybe I’ll think differently.

  20. Mario September 21, 2009 at 9:13 am - Reply

    Let me know when one can buy a microprocessor with a trillion transistors and 3D interconnections no more than a nanometer wide. Then forget about contemporary CPU design. If that day comes, I will tell you what the next step is. To simulate intelligence is one thing; to create intelligence is madness. Who will want to do that? Of course, it will be the ultimate human creation – literally! Will it ever happen? Yes. In the next 15 years? I don’t think so. In my lifetime (the next 50 years)? I hope not. One thing is sure – it will happen. Why? Because no ET has bothered to contact us, although we are beginning to realize they are out there for sure. If you don’t understand how I came to this conclusion, too bad 🙂 If you are young enough, one day you will . . .

  21. James Hoppe September 24, 2009 at 9:19 pm - Reply

    A strong AI breakthrough, while inevitable on some timeline, is too much like SETI (the Search for Extra-Terrestrial Intelligence) and research missions in space or the oceans to get the funds. There does not appear to me to be any hurry to build tools for global good. These projects exist, but get short shrift in the budgets. The search for salvation from space, the ocean, or artificial intelligence seems to me to have little relevance in a world where the first, and missing, step is obviously to care very much about saving ourselves, which we seem in no hurry to do.

    Who decides what gets built in the world? And when?

    Obviously an energy machine or a computing machine design is the best hope for a technological breakthrough to save us all, but efforts toward “good” problem-solving projects such as global warming, global hunger, and sustainable energy have historically been starved. Why should AI be any different?

    Unless there’s a new way of thinking dawning, there is simply not enough money to build strong AI by 2025. There is enough science. There is enough data. There are enough words. I bet the money isn’t there.

  22. complementaire sante July 28, 2010 at 9:49 am - Reply

    Hello, very interesting. I cover the same subject on my blog, and will allow myself to draw on your text, citing you of course, if you permit. I also write about subjects like supplemental health insurance and hospitalization coverage, as well as insurance for young people. Thanks, Alfie

  23. […] highlights this little article on Artificial Intelligence where J. Storrs Hall writes the following: If you’re OK with calling a […]

Leave A Comment