In the terminology I introduced in Beyond AI, all the AI we have right now is distinctly hypohuman:
The overall question we are considering, "Is AI possible?", can be summed up essentially as "Is diahuman AI possible?" The range of things humans can do, done as flexibly as humans do them, and learned the way humans learn them, is as reasonable a definition of intelligence as any. This is reflected in the "Wozniak Test" and the "Nilsson Test", i.e. the ability to do human jobs. (If nothing else, this obviates at least one other question, namely, at what point will AI have a major economic impact?)
The problem is, people have been claiming for quite some time that their robots could do the kinds of things the Woz test calls for:
From the marvelous Paleofuture blog, an advert for a robot maid in 1930! (Not exactly; read the blog post.)
Today, these things are getting closer to reality:
which is a lot closer to reality than the previous one — there’s a $3M/yr project behind it at the Korea Institute of Science and Technology.
Even so, I doubt that Mahru-Z or Willow Garage’s PR2 or any other existing robot could come close to passing the Woz test, much less the full Nilsson Test. On the other hand, I think it’s pretty clear that over the past couple of decades there has been a very strong advance in robotic capabilities and, IMHO, it bids fair to make robots usable in another decade and skillful in another one after that.
How about thinking and learning? This is really the crux of the issue; the Woz test is simply a way to sum up the necessary complexity and adaptability in a compact description. Nobody is putting the processing power necessary for serious AI into mobile robots. What the robot example shows is that for specific skills, the state of the art in programming is pretty close to matching what a typical person could learn.
The structure of intelligence can be broken down into a set of skills, ranging from pouring coffee to doing integration by parts; meta-skills, such as recognizing which skills are appropriate when, and planning with them; and the ability to learn new skills, including meta-skills, both by imitation and by inventing them. (Skills of course include recognizing and understanding things as well as doing things.)
Note that we’re well into the useful range even if the AI can only learn by imitation or by being taught, and never does anything particularly creative or original. So for the lowest level of AI, all we need is to program up all the basic skills we need, plus the ontologies — data structures for knowledge representation — that let the AI learn some kinds of new things, or at least be reasonably adaptable. It would clearly have a built-in “glass ceiling” on what kinds of things it could learn, but then so do quite a few people.
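The skill / meta-skill / imitation-learning breakdown above can be made concrete with a toy sketch. All of the names and the trivial "learning" mechanism here are illustrative assumptions, not a description of any real AI system; the point is only the three layers: skills, a meta-skill that recognizes which skill applies, and learning by copying a demonstration.

```python
# Toy sketch of the skill / meta-skill / imitation-learning breakdown.
# All names are hypothetical, purely to make the three layers concrete.

class Skill:
    """A basic skill: a recognizer for when it applies, plus an action."""
    def __init__(self, name, applies_to, perform):
        self.name = name
        self.applies_to = applies_to   # situation -> bool
        self.perform = perform         # situation -> result

def choose_skill(skills, situation):
    """A minimal meta-skill: recognize which skill is appropriate."""
    for skill in skills:
        if skill.applies_to(situation):
            return skill
    return None

def learn_by_imitation(skills, demonstration):
    """Non-creative learning: copy a demonstrated skill verbatim."""
    skills.append(Skill(demonstration["name"],
                        demonstration["applies_to"],
                        demonstration["perform"]))

# Usage: a repertoire with one skill, extended by imitation.
repertoire = [Skill("pour_coffee",
                    lambda s: s == "empty cup",
                    lambda s: "full cup")]
learn_by_imitation(repertoire, {
    "name": "integrate_by_parts",
    "applies_to": lambda s: s == "integral of u dv",
    "perform": lambda s: "u v - integral of v du",
})
skill = choose_skill(repertoire, "integral of u dv")
print(skill.name)  # integrate_by_parts
```

The "glass ceiling" is visible here: `learn_by_imitation` can only add skills it has been shown, never invent one, which is exactly the lowest useful level described above.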
One fairly good overview of the kinds of skills and meta-skills that can be programmed with current techniques is the leading textbook, Russell and Norvig’s Artificial Intelligence: A Modern Approach. Just look through the table of contents… If this thousand-page epic tome is light in any area, it would be the problem of inferring formalizations from unstructured data — but there’s a lot of work on that in real-world pursuits like data mining, where people are trying to take advantage of the treasure trove represented by the internet.
Bottom line: I think we have the techniques now to build an AI at the hypo/dia border, equivalent to a dull but functional human. It would have to run on a smallish supercomputer — say one rack full of servers stuffed with GPGPUs. The problem is that it would take a huge, coordinated project to implement all the techniques and skills that are understood into a single integrated system, and AI in practice is a cottage industry. Right now that’s not economically feasible, given the cost vs the economic value of one more dull human. But those things will shift during the coming decade — the hardware will get cheaper, the software more sophisticated, and quite possibly by 2020 the economics will look different. Then and only then will AI really take off.