My Robo Habilis post was picked up by Michael Anissimov, who wrote:
(me:) It seems to me that one obvious way to ameliorate the impact of the AI/robotics revolution in the economic world, then, is simple: build robots whose cognitive architectures are different enough from ours that their relative skillfulness at various tasks will differ from ours. Then, even after they are actually better at everything than we are, the law of comparative advantage will still hold.
Boom, friendliness problem solved. Build robots with different cognitive architectures than us, and they will be forced to keep us around, due to Ricardo's law of comparative advantage. Sounds wildly naive to me.
All I can say is thanks for noticing I've solved the most important problem of the 21st century with a single paragraph! I'm confidently expecting my Nobel Peace Prize.
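Since the argument turns on Ricardo's law, it is worth spelling out what it actually claims. Here is a toy numeric sketch (the production figures are made up purely for illustration) showing that even an agent that is absolutely better at every task still gains by trading with a weaker one, so long as their relative strengths differ:

```python
# Toy illustration of Ricardo's law of comparative advantage.
# All numbers are hypothetical; "widgets" and "poems" stand in for any two tasks.

HOURS = 8  # hours each agent works

# Output per hour of work.
robot = {"widgets": 10, "poems": 20}   # absolutely better at both tasks
human = {"widgets": 1,  "poems": 4}

def produce(rates, widget_hours):
    """Output (widgets, poems) when an agent splits its day between the two tasks."""
    poem_hours = HOURS - widget_hours
    return rates["widgets"] * widget_hours, rates["poems"] * poem_hours

# Baseline: each agent splits its time evenly between the two tasks.
rw, rp = produce(robot, 4)      # 40 widgets, 80 poems
hw, hp = produce(human, 4)      #  4 widgets, 16 poems
print("no specialization:  ", rw + hw, "widgets,", rp + hp, "poems")

# Specialization: the human (whose opportunity cost per poem is lower) writes
# poems full time; the robot writes just enough poems to keep the total at 96
# and spends the rest of its day on widgets.
hw, hp = produce(human, 0)                       # 0 widgets, 32 poems
robot_poem_hours = (96 - hp) / robot["poems"]    # 3.2 hours
rw, rp = produce(robot, HOURS - robot_poem_hours)
print("with specialization:", rw + hw, "widgets,", rp + hp, "poems")
```

Running it gives 48 widgets and 96 poems with specialization versus 44 and 96 without: even the absolutely superior producer comes out ahead by keeping the weaker one employed at its comparative advantage.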
But seriously, I would like to argue that the concept of the "friendliness problem" is a dangerous misreading of the real difficulties and problems we will face as a result of the development of artificial intelligence over the next few decades. It seems to me that one could characterize the people working on "Friendly AI" as essentially trying to redo moral philosophy, from scratch, and get it right this time. There's nothing wrong with this; moral philosophy is a valuable intellectual tradition and a worthwhile human activity. But the notion that the whole business could be solved in any useful sense, with the addition of the new insight that there can be intelligent machines as well as humans among the class of moral agents, just strikes me as silly. Indeed, the new insight makes moral philosophy a lot harder, rather than bringing it any closer to any kind of closure.
Instead let's look at the kinds of problems we're really going to face. There is not, I guarantee it, going to be any single overarching solution to them; there will be a host of smaller things we can do to ameliorate them, and we'll just have to keep coming up with those fixes as the problems arise.
We know what it will be like should we manage to invent and implement a giant, powerful decision-making system that takes over the world. We know because we've already done it. Some people have observed this system in action and seem to think that it has a "friendliness problem":
We're Governed by Callous Children
…
When I see those in government, both locally and in Washington, spend and tax and come up each day with new ways to spend and tax (health care, cap and trade, etc.), I think: Why aren't they worried about the impact of what they're doing? Why do they think America is so strong it can take endless abuse?
I think I know part of the answer. It is that they've never seen things go dark. They came of age during the great abundance, circa 1980-2008 (or 1950-2008, take your pick), and they don't have the habit of worry. They talk about their "concerns" - they're big on that word. But they're not really concerned. They think America is the goose that lays the golden egg. Why not? She laid it in their laps. She laid it in grandpa's lap.
They don't feel anxious, because they never had anything to be anxious about.
Peggy Noonan thinks the government is screwing us up because it's made of people who don't care. But I beg to differ. There's a classic fallacy in the philosophy of mind that shows up in places ranging from Leibniz's story of the "magnified mill" to Searle's Chinese Room: the assumption that for a system to have some property, the property must be present in its parts. This is just as false for caring as it is for understanding or consciousness. In fact the existing system is a perfect example, although in reverse: it's composed of people who do care, but they interact in a structure that results in an evil bureaucracy.
Instead, what's happened is that we made a blunder in designing the system that is exactly equivalent to a favorite example of Eliezer Yudkowsky's: instead of building a paperclip-maximizing machine, we built a vote-maximizing machine.
I claim that the problem is much more productively looked at from another point of view: the system as a whole is incompetent. It doesn't do what it was built to do ("... promote the general welfare, secure the blessings of liberty ..."). The designers simply assumed a vote-maximizer would do the things they wanted, but they were wrong. Similarly, no human wants the universe to be converted into paperclips, so anyone who built a machine with that goal would have designed it incompetently. What I claim we should be spending our time on is figuring out how to build competent AI.
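To make the vote-maximizer point concrete, here is a minimal sketch (the policies and scores are entirely made up) of what happens when an optimizer is pointed at a proxy objective rather than at what its designers actually wanted:

```python
# Toy model of objective mis-specification: an optimizer given the proxy
# objective ("votes") picks a different policy than one given the intended
# objective ("general welfare"). All names and numbers are hypothetical.

policies = {
    # policy name:            (expected votes, general welfare)
    "targeted giveaways":     (0.9, 0.2),
    "boring fiscal prudence": (0.4, 0.8),
    "do nothing":             (0.5, 0.5),
}

def best(score_index):
    """Return the policy that maximizes the chosen column of the table."""
    return max(policies, key=lambda name: policies[name][score_index])

print("vote-maximizer picks:   ", best(0))   # targeted giveaways
print("welfare-maximizer picks:", best(1))   # boring fiscal prudence
```

The optimizer is doing exactly what it was built to do; the incompetence lies entirely in the choice of objective.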
First principle of competent AI design: Build a machine that understands what you want. The paperclip maximizer is a study in amazing contrasts: presumably an intelligence powerful enough to take over the world would be capable of understanding human motivations even better than we do, so as to manipulate us effectively. Yet it's built with a complete cognitive deficit of appropriate motivations, goals, and values for itself. Incompetent.
Second principle: build machines that know their limitations. This basically means that a machine should confine its activities to those areas where it does understand the effects of its actions.
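To illustrate the second principle, here is a minimal sketch (the interface and the threshold are invented for the example, not a proposal for a real API) of an agent that acts only where its own assessment of how well it understands the consequences clears a bar, and defers otherwise:

```python
# Minimal sketch of "know your limitations": the agent executes an action only
# when its self-assessed understanding of the consequences is high enough;
# otherwise it declines to act and escalates. Everything here is hypothetical.

from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    understanding: float  # self-assessed grasp of the action's effects, 0..1

CONFIDENCE_FLOOR = 0.9    # below this, the agent treats itself as out of its depth

def decide(p: Proposal) -> str:
    if p.understanding >= CONFIDENCE_FLOOR:
        return f"execute: {p.action}"
    return f"defer to human review: {p.action} ({p.understanding:.0%} understood)"

print(decide(Proposal("reorder warehouse stock", 0.97)))
print(decide(Proposal("restructure the tax code", 0.35)))
```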
But in order to do that, we first have to be able to build a machine that can actually understand something, anything, in the full human-level meaning of understanding. And that is the necessary first step to a future of useful and beneficial AI, and it's what anyone concerned about such things should be working on.