Over the past ten to fifteen years, research in computational linguistics has undergone a dramatic “paradigm shift.” Statistical learning methods that automatically acquire knowledge for language processing from empirical data have largely supplanted systems based on human knowledge engineering. The original success of statistical methods in speech recognition has been particularly influential in motivating the application of similar methods to almost all areas of natural language processing. Statistical methods are now the dominant approaches in syntactic analysis, word sense disambiguation, information extraction, and machine translation.
Nevertheless, there is precious little research in computational linguistics on learning for “deeper” semantic analysis. …
–Raymond Mooney, in a paper at the AAAI 2004 Spring Symposium on Language Learning.
Although we have talked mostly about robotics in terms of how AI has been advancing, it’s instructive to look at developments in the other subfields as well. Natural Language Processing is among the oldest of them. Turing’s classic paper from 1950 laid out the ability to converse in ordinary, unstructured text as an unequivocal test of whether a machine could be said to think.
Although much can and has been written on the validity of the Turing Test as specified, it is clearly true that a computer with the ability to converse fluently in written and spoken English would be enormously more useful than the computers we have today. I think it’s also reasonably clear that most people would assume that there was “somebody home,” i.e. begin to impute intelligence and a self to such a computer.
The paradigm shift in NLP has been a result of two things: the increasing willingness (and ability) of AI researchers to use statistics and other numerical methods from the scientist’s toolkit, and the increasing size and availability of databases and corpuses, together with the processing power to subject them to intense analysis. The amateur AIer today can go on the web and obtain for free enough data and programs to cobble together an NLP system better than anything that existed in 1990. The processing power in a high-end workstation is good enough for a reasonable amount of research, but a couple more orders of magnitude will help immensely.
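To make the statistical approach concrete, here is a minimal sketch of its simplest instance: a bigram language model estimated from raw text by counting. The toy corpus and function names are illustrative inventions; real systems use vastly larger corpora and smoothing, but the principle is the same: probabilities come from data, not from hand-written rules.

```python
# A minimal sketch of the statistical approach: a bigram language model
# trained on a toy corpus by counting word pairs. Corpus and names are
# hypothetical; real models add smoothing and use huge corpora.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

bigram_counts = defaultdict(Counter)  # counts of (prev -> word)
context_counts = Counter()            # counts of prev alone

for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for prev, word in zip(words, words[1:]):
        bigram_counts[prev][word] += 1
        context_counts[prev] += 1

def prob(prev, word):
    """Maximum-likelihood estimate of P(word | prev)."""
    if context_counts[prev] == 0:
        return 0.0
    return bigram_counts[prev][word] / context_counts[prev]

# "the" occurs 6 times in the corpus; it is followed by "cat" twice.
print(prob("the", "cat"))  # 0.333...
```

Everything here is derived from the corpus; changing the data changes the model, with no grammar rules to rewrite.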
What statistical methods have done, in essence, is replace the hand-written grammars that characterized classic-era AI NLP. The new grammars are probabilistic, trained on huge corpuses, and considerably more robust in use than the old ones. On the other hand, they don’t reach up into the heights of semantics as well. I’d claim that there’s not an NLP system today that understands its entire vocabulary as well as SHRDLU did its. The reason is that for its tiny vocabulary, SHRDLU could have a hand-written piece of code for each concept, and thus have a real understanding, in some sense, of the concept. However, this will change over time. To begin with, people will simply write code for the most important concepts. Then people will come up with schemes to form new code for new concepts from fragments of old code and/or search methods like genetic programming.
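The SHRDLU style of “a hand-written piece of code for each concept” can be sketched in a few lines. The blocks-world model and the predicates below are hypothetical simplifications, not SHRDLU’s actual implementation, but they show the idea: each word’s meaning is a procedure that runs against a model of the world.

```python
# A toy illustration of procedural semantics in the SHRDLU tradition:
# each concept is hand-coded as a procedure over a world model.
# The world model and predicate names here are invented for illustration.

world = {
    "block1":   {"supports": "pyramid1"},  # a pyramid rests on block1
    "pyramid1": {"supports": None},
    "box1":     {"supports": None},
}

def on(x, y):
    """The concept 'x is on y', implemented directly as code."""
    return world[y]["supports"] == x

def clear(x):
    """The concept 'nothing is on x' -- e.g. a precondition for grasping."""
    return world[x]["supports"] is None

print(on("pyramid1", "block1"))  # True
print(clear("block1"))           # False: the pyramid is on it
```

The strength and the limit are the same thing: the system genuinely “understands” `on` and `clear`, but only because a human wrote each procedure, which is exactly why the approach doesn’t scale to a full vocabulary without the code-generating schemes mentioned above.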
Current leading-edge NLP systems (most of them proprietary, AFAIK) are surprisingly good at talking about whatever it is they actually know about, i.e. have a deep semantic model of, as long as you’re literal and prosaic (and expect them to be the same). I think it’s a toss-up whether automatic programming of semantics makes it to the hypohuman border this decade — but AI with hand-coded semantics such as Siri seems likely to be ubiquitous, and competent, by 2020.