From the Albany (OR) Democrat Herald:
Phone robots: Let’s all rebel
By Hasso Hering, Columnist | Posted: Saturday, November 7, 2009 11:45 pm
What this country needs – even more than a shorter baseball season so the World Series doesn’t go into November – is a popular uprising against the tyranny of telephone robots.
This is how those talking machines drive you up the wall.
You want some information from a company, but there is no local number. So, dreading what comes next, you dial the toll-free number in the book.
After the greeting and a burst of Spanish – which presumably means that if you prefer that language you should push numero uno or something – a machine asks you for your account number.
You don’t have one, of course. And while you’re thinking of what you might say to get to the next step, the machine gets impatient:
“I’m sorry, I didn’t get what you said. In order to proceed with this call, I need your account number.”
You sputter something in response, but it’s not an account number.
The robot comes back wanting to know your phone number. This is something you can provide, and you do, grudgingly, knowing that it really won’t help.
Sure enough, the robot asks: “I don’t recognize this number in our records. Is this the phone number for the account you have with our company?”
No, you dummy, it’s not. It’s my own phone number.
“I don’t have an account,” you say.
Robot: “I’m sorry, I didn’t understand. Is this the telephone number on the account? In order to proceed with your request, I need an account number or the telephone number for the account. If you do not have an account number or do not know it, say: I don’t know it.”
“I don’t know it,” you mumble, obediently.
Robot: “I’m sorry, I did not understand. …
These, unfortunately, are the kind of “robotic” robots that actually are taking over the world. And the problem is not that they’re too good, or too intelligent, or anything like that. Indeed, it’s just the opposite: the problem is that they’re incompetent. If Hering had gotten a polite, friendly, knowledgeable, and helpful agent on the phone, there wouldn’t have been much of a column.
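To make the failure concrete, here is a minimal sketch, in Python, of the kind of rigid, hand-scripted call flow the column describes. Everything in it (the function name, the sample phone numbers, the wording) is made up for illustration, not taken from any real vendor’s system; the point is only that a fixed script which matches nothing but an account number or a phone number on file has no path at all for “I don’t have an account.”

```python
# A minimal sketch of a rigid, hand-scripted IVR flow (all names and numbers
# are hypothetical). The robot accepts only an account number or a phone
# number already on file, so a caller with no account loops forever.

KNOWN_ACCOUNT_PHONES = {"5415550123"}  # made-up customer records


def rigid_ivr(caller_reply: str) -> str:
    """Return the robot's next prompt given what the caller just said."""
    digits = "".join(ch for ch in caller_reply if ch.isdigit())
    if len(digits) >= 10 and digits in KNOWN_ACCOUNT_PHONES:
        return "Thank you. One moment while I look up your account."
    if len(digits) >= 10:
        # The caller gave their own phone number, which isn't on any account.
        return ("I don't recognize this number in our records. Is this the "
                "phone number for the account you have with our company?")
    # Everything else -- including "I don't have an account" -- falls through
    # to the same demand, which is exactly the loop Hering describes.
    return ("I'm sorry, I didn't get what you said. In order to proceed "
            "with this call, I need your account number.")


for attempt in ["I don't have an account", "541-555-0100", "I don't know it"]:
    print(rigid_ivr(attempt))
```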
On the other hand, it should be pretty clear to any business that it would be better off with polite, friendly, knowledgeable, and helpful robots. There’s strong market pressure, and money available for development (to the extent that there’s money available for the development of anything).
A call-center help desk was one of the possibilities for an intelligence test mentioned at the AGI roadmap. The idea is that the system would be given a manual and some software (or other product) and a week (or whatever) to read and learn, and then be put on the phone and judged on how well it managed to help people who were having problems with that product.
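As a rough illustration of how such a test might be scored, here is a hypothetical sketch in Python. The names (TestCall, score_agent) and the sample problems are mine, not anything specified in the roadmap discussions, and a real evaluation would rely on human graders or live callers rather than exact-match answers.

```python
# Hypothetical scoring harness for the help-desk test described above.
# All names and sample data are illustrative assumptions, not part of any
# published test specification.
from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class TestCall:
    problem: str          # what the caller says is going wrong
    acceptable: Set[str]  # replies a grader would count as genuinely helpful


def score_agent(answer: Callable[[str], str], calls: List[TestCall]) -> float:
    """Fraction of callers whose problem the agent resolves."""
    resolved = sum(1 for call in calls if answer(call.problem) in call.acceptable)
    return resolved / len(calls)


# The protocol: hand the system the product manual, give it a week to study,
# then put it on the phone against calls like these and report the score.
sample_calls = [
    TestCall("The installer stops at 90 percent and never finishes.",
             {"Disable the antivirus scanner during installation and rerun the installer."}),
    TestCall("I can't log in after changing my password.",
             {"Clear the saved credentials and sign in again with the new password."}),
]

# e.g. score_agent(my_helpdesk_agent, sample_calls) -> a number between 0 and 1
```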
The state of the art in phone-answering systems isn’t quite as bad as the humorous editorial above makes out, but it’s still not good enough to carry on a reasonable conversation even on the simple, constrained subjects that an automated receptionist should handle. I confidently expect this to change over the coming decade — but it remains a toss-up, in my opinion, whether we’ll have a system that can learn to be a competent receptionist, as opposed to having been laboriously hand-coded and trained to be one. And if we do, it’ll most likely have major chunks of general skills coded in — things like speaking and reading, for example.
But to the company that wants a roboreceptionist, it doesn’t matter where the skills came from — the company will decide between learned and coded skills on the basis of cost. So if I had a system that could do the learning, it would be worth as much as the development and training team. I would want to sell trained systems with skills, not learning systems — that would be like giving away my factory. (It will be interesting to see what happens when open-source IDEs get good enough to be said to be learning the program rather than being a pile of tools for a programmer.) And it seems unreasonable to think that at any level of technology, learning a skill would be as cheap as simply doing it once learned.
So it seems very likely that the technology of learning AI will develop, in its early days at least, in the form of learning machines that create separate narrow AIs, instead of a more human-like learning paradigm. And it seems likely that a common origin of these learning systems will be AI development environments, which today are intended for very heavy human involvement and should simply become more and more automated over time. And of course these will be self-improving — the first thing everyone with a development environment does is use it to work on its own code — but again with lots of human input.
Let’s see if we can’t get to the point where I, as a software architect, can simply talk and wave my hands at my development system, which does all the low-level design and coding. Competently.