The first AI blog was written by a major, highly respected figure in the field. It consisted, as a blog should, of a series of short essays on various subjects relating to the central topic. It appeared in the mid-80s, just as the ARPAnet was transforming into the internet.
The only little thing I forgot to mention was that it didn’t actually appear in blog form, which of course hadn’t been invented. The WWW didn’t appear until the next decade. It appeared in book form, albeit a somewhat unusual one since it was, as mentioned, a series of short essays, one to a page. It was, of course, Marvin Minsky’s Society of Mind.
Of course, you’re reading a blog about AI right now. The difference is that that was Minsky, and this is merely me. If you haven’t read SoM, put down your computer and go read it now.
Good. You’re back. Here’s why SoM is relevant to our subject of whether and how soon AI is possible:
It remains a curious fact that the AI community has, for the most part, not pursued Society of Mind-like theories. It is likely that Minsky’s framework was simply ahead of its time, in the sense that in the 1980s and 1990s, there were few AI researchers who could comfortably conceive of the full scope of issues Minsky discussed, including learning, reasoning, language, perception, action, representation, and so forth. Instead the field has shattered into dozens of subfields populated by researchers with very different goals and who speak very different technical languages. But as the field matures, the population of AI researchers with broad perspectives will surely increase, and we hope that they will choose to revisit the Society of Mind theory with a fresh eye. (Push Singh; further quotes are from the same source)
In other words, here’s a comprehensive theory of what an AI architecture ought to look like, the summary of the lifework of one of the founders and leaders of the field, and yet no one has seriously tried to implement it. (When I say seriously, I mean putting as much effort into it as has gone into, say, Grand Theft Auto.) (There has been a serious effort to implement the theoretical approach of the CMU wing of classical AI, namely SOAR.)
Part of the reason for this is that SoM is in some sense only half a theory:
Minsky sees the mind as a vast diversity of cognitive processes each specialized to perform some type of function, such as expecting, predicting, repairing, remembering, revising, debugging, acting, comparing, generalizing, exemplifying, analogizing, simplifying, and many other such ‘ways of thinking’. There is nothing especially common or uniform about these functions; each agent can be based on a different type of process with its own distinct kinds of purposes, languages for describing things, ways of representing knowledge, methods for producing inferences, and so forth.
To get a handle on this diversity, Minsky adopts a language that is rather neutral about the internal composition of cognitive processes. He introduces the term ‘agent’ to describe any component of a cognitive process that is simple enough to understand, and the term ‘agency’ to describe societies of such agents that together perform functions more complex than any single agent could.
… but SoM doesn’t have a lot to say about what the individual functions are or how they are implemented, outside of a few examples. Since AI has for the past few decades concentrated on immediate results, most of the work has gone into parts of the problem that could be described as the stuff inside a single agent, or at most an agency.
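To make the agent/agency distinction concrete, here is a minimal sketch of how such a decomposition might look in code. Everything in it is my own illustration rather than anything from Minsky: the class names, the toy tower-building task, and the run-each-agent-in-turn coordination policy are all invented for the example.

```python
# A minimal sketch of the agent/agency distinction, not an implementation
# of Society of Mind. All names and the coordination policy are invented
# for illustration only.

class Agent:
    """A component simple enough to understand: one narrow function."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn            # this agent's own specialized process

    def act(self, state):
        return self.fn(state)


class Agency:
    """A society of agents that together does what no single agent could."""
    def __init__(self, name, agents):
        self.name = name
        self.agents = agents

    def act(self, state):
        # Toy coordination: let each agent update the shared state in turn.
        # A real agency would need something far richer than this.
        for agent in self.agents:
            state = agent.act(state)
        return state


# A toy "tower builder" agency assembled from narrower agents.
find_block = Agent("find-block", lambda s: {**s, "block": "found"})
grasp = Agent("grasp", lambda s: {**s, "held": s.get("block") == "found"})
stack = Agent("stack", lambda s: {**s, "tower": s.get("tower", 0) + (1 if s.get("held") else 0)})

builder = Agency("builder", [find_block, grasp, stack])
print(builder.act({}))    # {'block': 'found', 'held': True, 'tower': 1}
```

The structure is the easy part; the bodies of the individual agents, stubbed out here as one-line lambdas, are exactly the part SoM mostly leaves to be filled in, which is the “half a theory” point above.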
A good example of this came a few years ago with the winning of the DARPA Grand Challenge and thus the development of the self-driving car. A few months after that happened, I was having a conversation with an AI researcher at a conference. I maintained that the difference between the results of the first and second races (in the first, no vehicle made it even ten miles; a year and a half later, several cars finished the whole 130-mile course) represented real progress. He pooh-poohed the idea. All the techniques used in the cars had been previously known and published, he said. All that had happened was that they had been integrated into a working system.
I think this attitude goes a long way to explaining the lack of work on SoM and other overall cognitive architecture theories. But as I reasoned previously:
The difference was, the Wright brothers knew an extra Good Trick, which was how to control the plane in the air once it was flying.
So to develop a working AI, we need the power, meaning sheer computation, which we don’t think is going to be a problem. We need the lift, which corresponds to the kinds of techniques found in narrow AIs and discussed above. And finally we need the control.
SoM represents a theory of how the control might work. Where does that leave us? Can we simply take Minsky’s books and papers and build an AI with all the existing narrow-skill programs acting as agents? Hardly. There’s a lot of work to be done, and probably several new Good Tricks left to be found.
The bottom line, though, is that we are not facing a blank wall. We are facing a corridor with a sign reading “This way to the egress.” Indeed we are partway down the corridor already; robotics and self-driving cars have required the development of integrated cognitive architectures of the kind that will probably lead to success. Note that Brooks’ subsumption architecture had a lot in common with SoM.
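To see that family resemblance, here is a much-simplified sketch of the subsumption idea: a handful of small, self-contained behaviors, each with its own trigger, arbitrated by layering rather than by a central model or planner. The behaviors, the toy sensor dictionary, and the first-behavior-that-fires arbitration are my own illustrative assumptions, not Brooks’ actual design, which wires augmented finite-state machines together with suppression and inhibition links.

```python
# A much-simplified, illustrative sketch of subsumption-style control.
# Behavior names and the sensor format are invented for the example.

def avoid(sensors):
    """Reflex layer: turn away from anything too close."""
    if sensors["obstacle_distance"] < 0.5:
        return "turn-away"
    return None

def seek_goal(sensors):
    """Higher layer: head for the goal whenever it is visible."""
    if sensors["goal_visible"]:
        return "head-to-goal"
    return None

def wander(sensors):
    """Default layer: move about when nothing more specific fires."""
    return "wander"

def act(sensors):
    # Arbitration by layering: the first behavior that fires wins.
    # No central world model or planner, just many small competences,
    # which is the point of contact with SoM's society of agents.
    for behavior in (avoid, seek_goal, wander):
        command = behavior(sensors)
        if command is not None:
            return command

print(act({"obstacle_distance": 2.0, "goal_visible": True}))   # head-to-goal
print(act({"obstacle_distance": 0.2, "goal_visible": True}))   # turn-away
print(act({"obstacle_distance": 2.0, "goal_visible": False}))  # wander
```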
So there is at least a case to be made that we are into the home stretch. Of course that’s where the race really heats up and all the excitement happens…