So we will take it as given, or at least observed in some cases and reasonably likely in general, that AI can, at the current state of the programming art, handle any particular well-specified task, given enough (human) programming effort aimed at that one task.
We can be a bit more specific about what “well-specified” means. In general, if the task has a static ontology that the programmers can lay out in advance, it’s within the scope of current practice. A huge part of the progress of early AI was in fact simply building up (hand-made) ontologies. An ontology includes, BTW, not just a list of concept names, but the semantics: code to recognize, predict, simulate, and generally do whatever we’d expect of a person we would describe as “understanding” the concept.
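To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical names) of what an ontology entry with procedural semantics might look like: the concept is not just a label, but a bundle of executable competences.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical sketch: a static ontology entry pairs a concept name with
# the procedural semantics described above -- code to recognize instances
# of the concept, predict what they do, and simulate them.
@dataclass
class ConceptEntry:
    name: str
    recognize: Callable[[Any], bool]  # is this thing an instance of the concept?
    predict: Callable[[Any], Any]     # what comes next?
    simulate: Callable[[Any], Any]    # run a small mental model of it

# A toy entry for "even number": trivially simple, but it shows that
# "understanding" here means executable competence, not just a name.
even = ConceptEntry(
    name="even number",
    recognize=lambda x: isinstance(x, int) and x % 2 == 0,
    predict=lambda x: x + 2,                      # the next even number
    simulate=lambda x: [x + 2 * k for k in range(3)],
)

print(even.recognize(4))   # True
print(even.predict(4))     # 6
print(even.simulate(4))    # [4, 6, 8]
```

The point of the sketch is that every one of these fields is hand-written by a programmer, which is exactly what makes the ontology static.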
The difference between this “static AI” and real human-level intelligence is that people learn new concepts constantly. We learn several new words a day our entire lives (estimates range from one to ten, and of course this depends on individual intelligence and environment). Concepts are constantly changing and growing, splitting and merging, being half-forgotten and rediscovered.
The robustness of human intelligence comes not only from the ability to create new concepts, but from the fluidity and adaptability of the ones we already have.
There has been much less research on how to build concepts than on formalizing existing ones in static form. There’s a bias toward the latter because you get a machine that can do something useful much sooner that way.
However, there has been research into creating new concepts, and we can say something about it. It seems to be the area where large computational resources make a difference. The most general approach we have is search, in various forms. Deep Blue invented startling new chess strategies on the fly. And in simulated-evolution experiments, robots have evolved a number of concepts on their own.
(ps — if you want your research paper to be picked up by the pop-sci news and blogosphere, simply include the words “robot” and “predator” in it 🙂 )
Going back to Lenat’s AM, it has been understood that search, in various forms, is capable of the kind of learning we need, but also that it tends to run out of steam sooner rather than later. In other words, it seems likely that a properly set-up search can invent a fairly sophisticated concept, but you need a different setup for the next one. It’s generally accepted that some sort of evolutionary search goes on in the brain, but the system that controls it (setting up the search spaces, defining the fitness functions, and so forth) is definitely not well understood.
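A toy illustration of the asymmetry being described: the evolutionary search itself is a few lines of code, but notice how much of this sketch is hand-supplied setup. The search space (bitstrings), the fitness function, and the mutation operator are all chosen by the programmer for this one target; everything here (the target, the parameters) is a made-up example, not any particular system from the literature.

```python
import random

random.seed(0)  # deterministic toy run

# The "concept to be invented", standing in for whatever the search is after.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(candidate):
    # Hand-written fitness function: count of bits matching the target.
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate):
    # Hand-written variation operator: flip one random bit.
    i = random.randrange(len(candidate))
    return candidate[:i] + [1 - candidate[i]] + candidate[i + 1:]

def evolve(generations=200, pop_size=20):
    # Random initial population over the hand-chosen search space.
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        best = population[: pop_size // 2]                      # selection
        population = best + [mutate(random.choice(best)) for _ in best]
        if fitness(population[0]) == len(TARGET):
            break  # the concept has been "invented"
    return max(population, key=fitness)

print(evolve())
```

This kind of search reliably climbs to its one target and then stops: to invent the *next* concept you must supply a new fitness function and search space, which is precisely the role of the poorly understood controlling system mentioned above.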
Thus the key to understanding when and whether general AI can happen lies in the high-level organization that can guide the application of focused search to produce a growing set of concepts that work coherently together.