AI: Summing up

Let’s try to pull all the threads together, as futurists (which is the whole point here), and get some idea of when it might be reasonable to expect AI to show up. When I say AI, I mean the entire diahuman range, so the answer would still be a range even if we were historians looking back on the process from the vantage point of the far future.

I’ve claimed that “I think we have the techniques now to build an AI at the hypo/dia border, equivalent to a dull but functional human.” That doesn’t mean we have one now, or even that we could have one next year. What it means is that with the kind of techniques we now use to program self-driving cars, we could, with a major development effort, build an AI able to do as broad a range of that kind of task as a very dull human can, but one that would need additional programming to take on new tasks.

Commenter Alex Kilpatrick put forward a cogent objection to the “AI is near” thesis, writing:

All of the so-called gains in AI are still a million miles away from the “dull but functional human.” There are some things, like playing chess, that computers do really well. And intelligent humans do those things too. But that in no way means the computer is remotely intelligent.
The whole AI field is nothing but clever programming. Some of those programs are quite clever indeed, but they represent the intelligence of their creators, not the programs. Some programs may appear intelligent in very narrow domains, but they are extremely brittle — they will not be useful at all even on the borders of the domains for which they were designed.

I agree with this strongly as a description of the state of AI today, but with one major reservation: not quite all of the AI field is nothing but clever programming. The AI programs that do the most impressive application tasks certainly are, but that is because the efforts to build general learning machines are less than babies at the moment.

The key to moving up from the hypo/dia border into the diahuman range is imitation. I’d guess that the state of the art would let us build a machine that could watch someone sweeping a room and then sweep the same room with more or less the same series of strokes, though it would be brittle to changes in the furniture positions and so forth. (Consider the kind of learning demonstrated in Ng’s helicopter.) Building an AI that could watch lots of sweeping and then figure out on its own how to sweep a new room, without having been programmed with any knowledge of sweeping ahead of time, is the kind of thing we need to advance the state of the art. The toy sketch below illustrates the difference.
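
To make the distinction concrete, here is a toy sketch in Python; the one-dimensional “room,” the state encoding, and all the names are hypothetical, purely for illustration. It puts a replay imitator, which memorizes one demonstration, next to a generalizing imitator, which fits a crude nearest-neighbor policy from many demonstrations and so can act in situations it never saw.

    # Toy contrast between replaying one demonstration and generalizing
    # from many. The encoding is deliberately simplistic: a "state" is the
    # position of some dirt in a 1-D room, an "action" is a stroke length.

    def replay_imitator(trace):
        """Brittle imitation: repeat the demonstrated strokes verbatim.
        Fails as soon as the room (i.e., the states) changes."""
        return [action for _state, action in trace]

    def generalizing_imitator(traces):
        """Fit a policy from many demonstrations: act as the demonstrator
        did in the most similar observed state (1-nearest-neighbor)."""
        examples = [pair for trace in traces for pair in trace]

        def policy(state):
            _nearest, action = min(examples, key=lambda ex: abs(ex[0] - state))
            return action

        return policy

    # Demonstrations: (dirt_position, stroke_length) pairs from two rooms.
    demos = [
        [(0.1, 0.2), (0.4, 0.3), (0.9, 0.1)],
        [(0.2, 0.2), (0.6, 0.4), (0.8, 0.2)],
    ]

    sweep = generalizing_imitator(demos)
    print(sweep(0.55))  # a stroke for a situation never demonstrated -> 0.4

The point is only the shape of the problem: the replay imitator stores strokes, while the generalizing imitator stores a mapping from situations to strokes, which is a (very crude) inferred model.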

The difference is that in the second case the AI is inferring a model and a program from observations. But this is what 21st-century AI is (already) all about: typically, today, that means inferring statistical models from reams and reams of observations, but it is at least tackling the right problem. The main thing that will determine the rate of advance is how much of the clever programming goes directly into end applications and how much goes into basic core learning.
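
As a minimal illustration of what “inferring a statistical model from observations” means in the simplest case, here is a sketch that fits a Gaussian to some made-up stroke lengths by maximum likelihood; it describes no particular system.

    # Maximum-likelihood fit of a Gaussian: about the simplest instance of
    # inferring a statistical model from observations. No knowledge of the
    # domain is built in; the parameters come entirely from the data.

    from math import sqrt

    def fit_gaussian(observations):
        n = len(observations)
        mean = sum(observations) / n
        variance = sum((x - mean) ** 2 for x in observations) / n  # ML estimate
        return mean, sqrt(variance)

    strokes = [0.21, 0.30, 0.12, 0.24, 0.33, 0.18]
    mu, sigma = fit_gaussian(strokes)
    print(f"stroke length ~ N({mu:.3f}, {sigma:.3f}^2)")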

Concept formation, model building, program inference, and so on are a quantum step harder than parameter tuning in a known ontology. However, the math for that kind of thing is advancing, and the processing power to use techniques such as search and genetic algorithms (GAs) is on its way in the next decade. I don’t think we’ll have a superintelligent AI by 2020; indeed, I don’t think we’ll even have one that can educate itself by reading Wikipedia. But I do think there’s at least a 50% chance we’ll have AIs that can learn something by a combination of imitation and careful verbal coaching.
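
To show why program inference is a step beyond parameter tuning, here is a toy enumerative search, one flavor of the search techniques alluded to above; the expression grammar is hypothetical and absurdly small. Rather than tuning the parameters of a fixed formula, it searches a space of small programs for one that reproduces the observed input/output behavior.

    # Toy program inference by enumerative search: look through a space of
    # small expression trees for a program consistent with the examples.
    # Real systems are vastly more sophisticated; eval() is used only for
    # brevity in this sketch.

    from itertools import product

    LEAVES = ["x", "1", "2"]
    OPS = ["+", "*", "-"]

    def expressions(depth):
        """Enumerate expression strings over x up to a given tree depth."""
        if depth == 0:
            yield from LEAVES
            return
        yield from expressions(depth - 1)
        for op, left, right in product(OPS, list(expressions(depth - 1)),
                                       list(expressions(depth - 1))):
            yield f"({left} {op} {right})"

    def infer_program(examples, max_depth=2):
        """Return the first enumerated expression consistent with all examples."""
        for expr in expressions(max_depth):
            if all(eval(expr, {"x": x}) == y for x, y in examples):
                return expr
        return None

    # Observed behavior of an unknown program: f(x) = 2*x + 1.
    print(infer_program([(0, 1), (1, 3), (5, 11)]))  # -> "(x + (x + 1))"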
