In response to my Robo Habilis post, Tim Tyler replied:
An intelligence challenge should not involve building mechanical robot controllers – IMO. That’s a bit of a different problem – and a rather difficult one – because of the long build-test cycle involved in such projects.
There are plenty of purer tests of intelligence that use more abstract ideas – games, puzzles, and other classical intelligence test fodder.
If you want to measure the abilities of mechanical robots, then fine, but let’s not pretend that it’s the same thing as measuring intelligence.
This is a fairly widely held view — there were a couple of researchers at the AGI Roadmap meeting expressing the same idea. If I understand him correctly, Minsky feels the same way. I believe, however, that it is not true.
To begin with, that was the reigning paradigm of the entire “golden age” of AI from the 50s through the 70s. Even Shakey the Robot had a bicameral control architecture: a body control program written in SAIL, and a cognitive engine written in LISP. It was strongly believed that the parts of thought that were hard for humans would be the hard ones to program, and that once we got those licked, building the lower-level body-controller stuff (or vision, or speech-to-text for the input) would be an afterthought, or at most a clean-up engineering exercise.
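To make the shape of that paradigm concrete, here is a minimal, hypothetical sketch (in Python, purely illustrative; Shakey’s actual code was written in SAIL and LISP, and the names below are invented) of a bicameral controller: a symbolic planner that knows nothing about motors, handing abstract actions to a separate body layer.

```python
# Hypothetical sketch of a "bicameral" controller in the Golden Age style:
# a symbolic planner on top, a separate low-level body controller underneath.
# (Illustrative only; Shakey's real system used SAIL and LISP.)

from dataclasses import dataclass


@dataclass
class MotorCommand:
    left_wheel: float    # wheel velocities, arbitrary units
    right_wheel: float
    duration: float      # seconds


class BodyController:
    """Low-level half: turns abstract actions into motor commands."""

    def execute(self, action: tuple) -> list:
        verb, *args = action
        if verb == "turn":
            angle = args[0]
            return [MotorCommand(-0.2, 0.2, abs(angle) / 30.0)]
        if verb == "forward":
            distance = args[0]
            return [MotorCommand(0.5, 0.5, distance / 0.5)]
        raise ValueError(f"unknown action {verb!r}")


class CognitiveEngine:
    """High-level half: reasons entirely in symbols, knows nothing about motors."""

    def plan(self, goal: str) -> list:
        # A toy "plan": in the classic paradigm, this is where all the
        # supposedly hard intelligence was expected to live.
        if goal == "go to the doorway":
            return [("turn", 90), ("forward", 2.0)]
        return []


# The two halves meet only through a thin stream of symbolic actions.
engine, body = CognitiveEngine(), BodyController()
for action in engine.plan("go to the doorway"):
    for command in body.execute(action):
        print(command)
```

The Golden Age assumption was that everything interesting happened in the top half, and the bottom half was mere plumbing.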
Over the course of the 60s, classic AI had a tremendous run of success, which is pretty neatly summed up by the work in Minsky’s “Semantic Information Processing.” They had programs that did games, puzzles, intelligence tests, arithmetic word problems, freshman calculus. The hard stuff. They were full of optimism, and predicted that AI would run to a successful conclusion, creating an artificial mind, in another decade or two. They had done the college student; how much more effort should it take to do a toddler?
They were wrong. The greatest lesson that came out of the Golden Age was that “the hard stuff is easy, and the easy stuff is hard.” Any toddler could recognize a dog in a picture; it would be three more decades before AI could get even close (and it’s still not really there yet).
The mind, it turns out, is like an iceberg — most of it is unseen to consciousness, below the waterline. Perhaps a better analogy would be that consciousness is like the legislature of a country, or the head office of a company. What they perceive is in reality only an executive summary of what’s really happening. What the early AI researchers had done was to build a “company” consisting only of the board of directors and secretaries, but no factories, no sales force, no middle managers, no shop foremen, and no labor force.
The brain was evolved as a body controller. Evolution typically takes a structure that works and copies and adapts it to the next task. Consider the increasing intelligence of animals as we work ourselves up the evolutionary tree towards the human: insects, reptiles, mammals, primates. At every level new and improved kinds of control, feedback, discrimination, planning, and learning are built into the structure — and it’s all still there forming the part below the iceberg, the real company outside the boardroom, of human intelligence.
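As a rough, hypothetical illustration of that layering (loosely in the spirit of subsumption-style robot control, with invented names, and no claim about real neural organization), each new capability wraps the older ones and modulates them rather than replacing them:

```python
# Hypothetical sketch of layered control: each newer layer wraps the older
# ones and modulates their output rather than replacing them.
# (Loosely in the spirit of subsumption-style robot architectures;
# no claim is made here about real neural organization.)


class Reflex:
    """Oldest layer: fixed stimulus-response."""

    def act(self, sensed):
        return "withdraw" if sensed.get("pain") else "continue"


class Habit(Reflex):
    """Adds learned feedback: repeats what worked before in this context."""

    def __init__(self):
        self.memory = {}   # context -> action learned by experience

    def act(self, sensed):
        base = super().act(sensed)
        return self.memory.get(sensed.get("context"), base)


class Planner(Habit):
    """Adds lookahead: can override habit when a goal demands it."""

    def __init__(self, goal):
        super().__init__()
        self.goal = goal

    def act(self, sensed):
        base = super().act(sensed)
        if self.goal == "reach-food" and sensed.get("food_visible"):
            return "approach"   # planning overrides habit and reflex...
        return base             # ...but the older layers still run underneath


agent = Planner("reach-food")
print(agent.act({"food_visible": True}))   # -> approach
print(agent.act({"pain": True}))           # -> withdraw
```

The older layers keep running underneath the newer ones, which is exactly the part of the iceberg below the waterline.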
The classic AIers at the Roadmap asked me, “Isn’t a blind paraplegic still intelligent?” and of course he is — but only because his brain still contains all the mechanism that evolved to do the control and interpretation he now lacks.
The buzzword in current AI for the reason bodies are important is “symbol grounding.” This refers to philosophical theories of meaning among symbols in symbol-processing machinery, and a simplistic reading of it is that whereas SHRDLU doesn’t “really know” what a red block is, a physical robot that plays with real blocks really does. Unfortunately, the term in common use is often taken as implying that there is some magical transubstantiation of meaning into symbols by virtue of having a physical body; this isn’t right, and it obscures the real issue. The paraplegic still has meaning in his mind.
What has to be there is not the actual body, but the mental mechanism for controlling it — the machinery that allows the mind to imagine, predict, describe, and relate other concepts to the one said to be understood. Most of our higher-level concepts are drawn, by analogy and blending, from the basic (and very large) set of concepts we have learned by experience, on the shop floors of our minds, as we interact with the real world over the course of our lives.
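A toy way to picture that claim (a hypothetical sketch with invented names, not anyone’s actual cognitive architecture): the difference between a bare symbol and a grounded concept is that the latter carries the imagining-and-predicting machinery along with it.

```python
# Hypothetical illustration: a bare symbol versus a "grounded" concept.
# The names and structure are invented for this example, not taken from
# any particular cognitive architecture.


class BareSymbol:
    """A token whose meaning is exhausted by its links to other tokens."""

    def __init__(self, name, related):
        self.name = name
        self.related = related


class GroundedConcept(BareSymbol):
    """A concept that also carries control-and-prediction machinery:
    it can be used to imagine an instance and to predict what happens
    when you interact with one."""

    def __init__(self, name, related, simulate, predict):
        super().__init__(name, related)
        self.simulate = simulate   # imagine an instance (mental imagery)
        self.predict = predict     # anticipate the result of an action


# A bare symbol for "red block": nothing but links to other symbols.
red_block_symbol = BareSymbol("red-block", ["block", "red", "on", "table"])

# The grounded version adds the machinery a body controller would supply.
red_block_concept = GroundedConcept(
    "red-block",
    ["block", "red", "on", "table"],
    simulate=lambda: {"shape": "cube", "color": (255, 0, 0), "size_cm": 5},
    predict=lambda action: "it slides" if action == "push" else "it topples",
)

print(red_block_concept.predict("push"))    # -> it slides
```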
Could that interpretive, predictive, concept-building, etc., cognitive machinery be built some other way than by working up a controller for a humanoid robot body? Certainly. But there are two reasons to do it with a body. First, it’s most likely easiest that way. There are a lot of things we don’t know yet about how the mind works, and there’s no reason to think we don’t have blind spots of our own, just as the classic AIers did. Working with real robots will show us the gaps fastest.
The second reason is that once we get the brain built, if we’ve put it together in a rough semblance of the phylogenetic/ontogenetic sequence by which the human mind is built, there’ll be a much better chance that its meanings will match ours. It will understand things the way we do (though of course humans vary a lot in how we understand things), and do things the way we do, and thus appreciate the way we do them, and vice versa. For example, the parts of the brain that control language and manual manipulation overlap strongly. Try to teach your robot sign language without a similar structure and it will never get the “accent” right. Nor, unless it has the same kind of manipulation control to borrow, will it ever be as fluent in English as a human.
Separating “intelligence” from the rest of cognitive function is a false dichotomy, and one that has led AI astray — in a big way — before.