So far, in making my case that AI is (a) possible and (b) likely in the next decade or two, I’ve focused on techniques which are, or easily could be, part of a generally intelligent system, and which will clearly be enhanced by the two-orders-of-magnitude increase in processing power we expect from Moore’s Law by 2020. (Note — we certainly don’t have to wait till 2020 to find out. Existing hardware is well into the usable range, probably for less than $1M. But few researchers, and no hobbyists, do their research on machines like that today. They will in 2020.)
To make a heavier-than-air airplane fly, you need an engine. If you have an airframe with lift-to-drag ratio r, stall speed s, and weight w, and a propeller with thrust efficiency e, you need an engine with power p = sw/(re) to fly: the thrust required is w/r, the power at the propeller is that thrust times s, and the engine must supply it through efficiency e. Power < p, no fly. Power > p, fly.
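That threshold can be sketched directly from the definitions: thrust required is weight divided by lift-to-drag, power at the propeller is thrust times speed, and the engine must supply that power through the propeller's efficiency. The function name and the example numbers below are illustrative only (very rough Wright-Flyer-scale values, not historical data).

```python
def min_engine_power(weight_n, stall_speed_ms, lift_to_drag, prop_efficiency):
    """Minimum engine power (watts) to sustain level flight at stall speed."""
    thrust = weight_n / lift_to_drag          # drag to overcome, newtons
    prop_power = thrust * stall_speed_ms      # watts delivered at the propeller
    return prop_power / prop_efficiency      # watts required at the engine shaft

# Illustrative numbers: ~3400 N weight, ~12 m/s stall speed,
# L/D of 6, propeller efficiency 0.66 -> roughly 10 kW (~14 hp).
p = min_engine_power(3400, 12, 6.0, 0.66)
```

Note how the threshold falls as lift-to-drag rises: a more efficient airframe needs a smaller engine, which is why both lift and power mattered to the early builders.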
Both of the major American flying machine efforts understood this. Langley spent huge effort developing light, powerful engines. The brothers Wright built their own aeroengine from scratch in their bicycle shop.
The difference was that the Wright brothers knew an extra Good Trick: how to control the plane once it was in the air.
So to develop a working AI, we need the power, which we don’t think is going to be a problem. We need the lift, which is the kind of techniques found in narrow AIs and discussed above. And finally we need the control.
What I just said is an example of reasoning by analogy. To an extent much greater than usually realized, most cognition and reasoning is based on analogy. When you perform a physical skill, the specific sequence of sensory and motor signals is never exactly any of the ones that happened during practice; but they’re close enough that the mapping is straightforward.
This is something that is well-known to the AI mainstream:
But “the big feature of human-level intelligence is not what it does when it works but what it does when it’s stuck,” Minsky said. When faced with novelty, Minsky claims, human intelligence applies “reasoning by analogy” to make the most direct tap into the cognitive glue that fuses knowledge domains.
Reasoning by analogy is a way of adapting old knowledge, which almost never perfectly matches the present situation, by following a recipe of detecting differences and tweaking parameters. It all happens so quickly that no “thinking” seems to be involved. (EE Times)
The particular kind of reasoning by analogy that would make an associative memory machine work well can be called analogical quadrature. This is the form of problem solved most famously by Melanie Mitchell’s Copycat program: you have three things A, B, and C, and you want to find a fourth D such that A:B::C:D. In the associative memory scheme, you need to perform not the literal action from the memory, but the action that fits the current situation the way the remembered action fit the remembered situation.
As a simple example, if the remembered action was done by someone else, the parallel could be mapping things so that the action is done by you this time. In other words, analogical quadrature enables imitation.
If you can somehow represent your concepts as points in an n-dimensional space, analogical quadrature is falling-down easy: D = C + B − A in ordinary vector algebra. Of course, sometimes the mapping into n-space is problematic, and we are thrown back on symbolic methods such as those of the FARGitecture.
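The vector version really is that short. Here is a minimal sketch, assuming concepts have already been embedded as points; the tiny two-dimensional "embedding" is hand-made for illustration and stands in for whatever real mapping into n-space you might have.

```python
def quadrature(a, b, c):
    """Analogical quadrature: return D such that A:B::C:D, i.e. D = C + B - A."""
    return tuple(ci + bi - ai for ai, bi, ci in zip(a, b, c))

# Toy hand-made embedding: axis 0 ~ "royalty", axis 1 ~ "gender".
emb = {
    "man":   (0.0,  1.0),
    "woman": (0.0, -1.0),
    "king":  (1.0,  1.0),
    "queen": (1.0, -1.0),
}

d = quadrature(emb["man"], emb["woman"], emb["king"])

# Snap the resulting point back to the nearest stored concept.
nearest = min(emb, key=lambda w: sum((x - y) ** 2 for x, y in zip(emb[w], d)))
# man:woman::king:nearest -> "queen"
```

All the work, of course, is hidden in building an embedding where such arithmetic holds; when it doesn't, you're back to the symbolic route.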
Those have their own problems, essentially the same ones as any symbolic AI: the operations and ontology in, e.g., Copycat are all idiosyncratic and hand-coded, and there’s no clear way to build a learning machine that extends them automatically.
I’ll go out on a limb and guess that the ultimate solution will involve elements of both extremes. Search will be needed both to find new operations for symbolic formulations, and to find appropriate mappings into n-space for the subsymbolic ones. A few key insights — new Good Tricks — will be necessary to unify the known methods and give us a solid understanding of, and engine for, analogical quadrature. That’ll be a huge step towards general AI.