One species of early hominid is named Homo habilis, meaning “handy man,” for its significant advance in tool use over earlier hominids. One of the goals of the AGI Roadmap is to chart paths to full human intelligence, and one such path might follow the route evolution took. The Wozniak Test, i.e. being able to make coffee in a randomly chosen home, is a test of tool-use competence. It is a special case of what we might call the Nilsson Test, outlined in a 2005 paper by Nils Nilsson, one of the leading figures in AI:
Machines exhibiting true human-level intelligence should be able to do many of the things humans are able to do. Among these activities are the tasks or “jobs” at which people are employed. I suggest we replace the Turing test by something I will call the “employment test.” To pass the employment test, AI programs must be able to perform the jobs ordinarily performed by humans. Progress toward human-level AI could then be measured by the fraction of these jobs that can be acceptably performed by machines.
Let me be explicit about the kinds of jobs I have in mind. Consider, for example, a list of job classifications from “America’s Job Bank.” A sample of some of them is given in figure 1:
Meeting and Convention Planner
Maid and Housekeeping Cleaner
Procurement and Sales Engineer
Farm, Greenhouse, Nursery Worker
Home Health Aide
Small Engine Repairer
Tour Guide and Escort
Engine and Other Machine Assembler
Marriage and Family Counselor
Hand Packer and Packager
Just as objections have been raised to the Turing test, I can anticipate objections to this new, perhaps more stringent, test. Some of my AI colleagues, even those who strive for human-level AI, might say “the employment test is far too difficult—we’ll never be able to automate all of those jobs!” To them, I can only reply “Just what do you think human-level AI means? After all, humans do all of those things.”
Now some of those jobs require specialized training and years of experience, while others are entry-level, accessible immediately to the average human. Most fall somewhere in between. Note that “Maid and housekeeping cleaner” by itself subsumes the Wozniak Test.
The ability of an AGI (= human-level AI) to do most or all of the jobs humans do is cause for a certain amount of concern. This brings us to a recent post by Robin Hanson:
Yes, techies agree on the long term plausibility of machines doing almost all jobs at a cost below human subsistence wages, thereby gaining almost all income, while economists ignore this scenario. …
Economists should listen more to techies on what techs will be feasible at what costs, but techies should also listen more to economists on the social implications of tech costs. Alas, just as economists prefer to rely on their intuitive folk tech forecasts, techies prefer to rely instead on their intuitive folk economics. …
The standard views of techies about what techs will be feasible might be wrong, and the standard views of economists of how to forecast tech consequences might be wrong. And it is fine for contrarians to try to persuade specialists they are in error, though contrarians would be wise to at least understand the standard view before trying to overturn it. But surely what the world needs first and foremost is to see and take seriously the simple combination of the standard views on such important topics.
One of the standard economic laws that applies in this case is Ricardo’s Law of Comparative Advantage. It states, roughly, that two parties with differing productivities both gain by trading. In particular, and this is the counter-intuitive part, trade benefits even the more productive party (e.g. the machines) when it trades with the less productive one (us, in the robot economy scenario). The only exception is when the parties’ productivities across goods are in exactly the same proportions, leaving neither party anything to specialize in.
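The arithmetic behind comparative advantage is easy to check with a toy example. The sketch below uses made-up productivity numbers: a “machine” that is better at producing both goods, and a “human” who is worse at both but relatively less bad at one of them. Even so, reallocating work according to comparative advantage yields more total output than each party producing both goods on its own.

```python
# Toy illustration of Ricardo's Law of Comparative Advantage.
# All productivity numbers below are invented for this example.

HOURS = 8  # working hours available to each party

# Output per hour of work: {good: units}
machine = {"widgets": 10, "meals": 10}  # better at everything
human   = {"widgets": 1,  "meals": 2}   # worse at everything

# Opportunity cost of one widget, measured in meals forgone.
machine_cost = machine["meals"] / machine["widgets"]  # 1.0 meal per widget
human_cost   = human["meals"] / human["widgets"]      # 2.0 meals per widget
# Widgets cost the machine fewer meals than they cost the human, so the
# machine's comparative advantage is in widgets, the human's in meals.

# Autarky: each party splits its time evenly and makes both goods.
autarky_widgets = (HOURS / 2) * machine["widgets"] + (HOURS / 2) * human["widgets"]
autarky_meals   = (HOURS / 2) * machine["meals"]   + (HOURS / 2) * human["meals"]

# Specialization: the human makes only meals; the machine makes just
# enough meals to keep the meal total unchanged, then makes widgets.
human_meals        = HOURS * human["meals"]
machine_meal_hours = (autarky_meals - human_meals) / machine["meals"]
machine_meals      = machine_meal_hours * machine["meals"]
trade_widgets      = (HOURS - machine_meal_hours) * machine["widgets"]
trade_meals        = human_meals + machine_meals

print(f"autarky: {autarky_widgets} widgets, {autarky_meals} meals")
print(f"trade:   {trade_widgets} widgets, {trade_meals} meals")
# Meal output is held constant, yet widget output rises: the surplus can
# be split so that BOTH parties end up better off, even though the
# machine out-produces the human at every task.
```

Note that if the two parties’ productivities were in identical proportions (say the human produced 1 widget or 1 meal per hour), both opportunity costs would be equal and the reallocation would produce no surplus; that is exactly the exception to the law mentioned above.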
It seems to me that one obvious way to ameliorate the economic impact of the AI/robotics revolution, then, is simple: build robots whose cognitive architectures differ enough from ours that their relative skillfulness at various tasks differs from ours as well. Then, even after they are actually better than we are at everything, the law of comparative advantage will still hold.