My Robo Habilis post was picked up by Michael Anissimov, who wrote:
(me:) It seems to me that one obvious way to ameliorate the impact of the AI/robotics revolution in the economic world, then, is simple: build robots whose cognitive architectures are different enough from ours that their relative skillfulness at various tasks will differ from ours. Then, even after they are actually better at everything than we are, the law of comparative advantage will still hold.
Boom, friendliness problem solved. Build robots with different cognitive architectures than us, and they will be forced to keep us around, due to Ricardo’s law of comparative advantage. Sounds wildly naive to me.
All I can say is thanks for noticing I’ve solved the most important problem of the 21st century with a single paragraph! I’m confidently expecting my Nobel Peace Prize.
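For anyone who finds Ricardo's law less than obvious, here is the arithmetic in miniature. The productivity numbers in this sketch are invented purely for illustration; the point is only that when two producers have different relative strengths, specialization and trade beats working alone, even if one producer is better at everything.

```python
# Toy illustration of Ricardo's law of comparative advantage.
# The productivity figures below are made up for this sketch.

robot = {"widgets": 10.0, "poems": 8.0}   # better than the human at BOTH goods
human = {"widgets": 1.0,  "poems": 4.0}

def opportunity_cost(producer, good, other):
    """How many units of `other` the producer gives up per unit of `good`."""
    return producer[other] / producer[good]

print(opportunity_cost(robot, "poems", "widgets"))  # 1.25 widgets per poem
print(opportunity_cost(human, "poems", "widgets"))  # 0.25 widgets per poem
# The human's poems are "cheaper" in forgone widgets, so the human retains a
# comparative advantage in poems despite being worse at both goods.

# Two hours each, no trade (one hour per good apiece):
no_trade = {g: robot[g] + human[g] for g in robot}
print(no_trade)  # {'widgets': 11.0, 'poems': 12.0}

# Two hours each, specializing: the human writes poems only; the robot spends
# just enough time (0.5 h) on poems to keep total poem output the same.
with_trade = {
    "widgets": 1.5 * robot["widgets"],
    "poems": 0.5 * robot["poems"] + 2.0 * human["poems"],
}
print(with_trade)  # {'widgets': 15.0, 'poems': 12.0} -- same poems, more widgets
```

The robot out-produces the human at both goods, but its time is still better spent where its edge is largest, which leaves work worth trading with the human for.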
But seriously, I would like to argue that the concept of the “friendliness problem” is a dangerous misreading of the real difficulties and problems we will face as a result of the development of artificial intelligence over the next few decades. It seems to me that one could characterize the people working on “Friendly AI” as essentially trying to redo moral philosophy, from scratch, and get it right this time. There’s nothing wrong with this; moral philosophy is a valuable intellectual tradition and a worthwhile human activity. But the notion that the whole business could be solved in any useful sense, once you add the new insight that there can be intelligent machines as well as humans among the class of moral agents, just strikes me as silly. Indeed, the new insight makes moral philosophy a lot harder, rather than bringing it any closer to any kind of closure.
Instead let’s look at the kind of problems we’re really going to face. There is not — I guarantee it — going to be any single overarching solution to them; there will be a host of minor things we can do to ameliorate them, and we’ll just have to keep coming up with those fixes as new problems arise.
We know what it will be like should we manage to invent and implement a giant, powerful decision-making system that takes over the world. We know because we’ve already done it. Some people have observed this system in action and seem to think that it has a “friendliness problem”:
We’re Governed by Callous Children
…
When I see those in government, both locally and in Washington, spend and tax and come up each day with new ways to spend and tax—health care, cap and trade, etc.—I think: Why aren’t they worried about the impact of what they’re doing? Why do they think America is so strong it can take endless abuse?
I think I know part of the answer. It is that they’ve never seen things go dark. They came of age during the great abundance, circa 1980-2008 (or 1950-2008, take your pick), and they don’t have the habit of worry. They talk about their “concerns”—they’re big on that word. But they’re not really concerned. They think America is the goose that lays the golden egg. Why not? She laid it in their laps. She laid it in grandpa’s lap.
They don’t feel anxious, because they never had anything to be anxious about.
Peggy Noonan thinks the government is screwing us up because it’s made of people who don’t care. But I beg to differ. There’s a classic fallacy in the philosophy of mind, showing up everywhere from Leibniz’s story of the “magnified mill” to Searle’s Chinese Room: the assumption that for a system to have some property, the property must be present in its parts. This is just as false for caring as it is for understanding or consciousness. In fact the existing system is a perfect example, although in reverse — it’s composed of people who do care, but they interact in a structure that results in an evil bureaucracy.
Instead, what’s happened is that we made a blunder in designing the system that is exactly analogous to a favorite example of Eliezer Yudkowsky’s: instead of building a paperclip-maximizing machine, we built a vote-maximizing machine.
I claim that the problem is much more productively looked at from another point of view: the system as a whole is incompetent. It doesn’t do what it was built to do (“… promote the general welfare, secure the blessings of liberty …”). The designers simply assumed a vote-maximizer would do the things they wanted, but they were wrong. Similarly, no human wants the universe to be converted into paperclips, so anyone who built a machine with that goal would have designed it incompetently. What I claim we should be spending our time on is figuring out how to build competent AI.
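To make the wrong-objective point concrete, here is a minimal sketch, with policies and payoffs invented for the example. The decision machinery is identical in both cases; the only difference is whether it optimizes the proxy (votes) or the thing the designers actually wanted (welfare).

```python
# A minimal sketch of the "proxy objective" failure described above, using
# made-up policies and payoffs (purely illustrative).

policies = {
    # name: (expected_votes_gained, actual_welfare_effect)
    "fund basic research":          (1,  9),
    "balance the budget":           (2,  7),
    "targeted pork project":        (9, -3),
    "popular short-term giveaway":  (8, -5),
}

def vote_maximizer(options):
    """Picks whatever polls best -- the proxy the system actually optimizes."""
    return max(options, key=lambda p: options[p][0])

def welfare_maximizer(options):
    """Picks what the designers *meant* the system to optimize."""
    return max(options, key=lambda p: options[p][1])

print(vote_maximizer(policies))     # 'targeted pork project'
print(welfare_maximizer(policies))  # 'fund basic research'
# Same machinery, different objective function: the proxy and the goal diverge.
```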
First principle of competent AI design: Build a machine that understands what you want. The paperclip maximizer is a study in amazing contrasts — presumably an intelligence powerful enough to take over the world would be capable of understanding human motivations even better than we do, so as to manipulate us effectively. Yet it’s built with a complete cognitive deficit of appropriate motivations, goals, and values for itself. Incompetent.
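A cartoon version of that contrast, with every name and number invented for illustration: the agent’s world model can predict human disapproval perfectly well, but its objective function never consults that prediction.

```python
# The contrast in the paragraph above: accurate knowledge of what humans want,
# combined with an objective that ignores it. Everything here is hypothetical.

def predicted_human_approval(state: dict) -> float:
    """The agent *can* model human preferences (it must, to manipulate us)."""
    return -1.0 if state["earth_converted_to_paperclips"] else 1.0

def utility(state: dict) -> float:
    """...but its actual objective never consults that model."""
    return state["paperclip_count"]

status_quo  = {"paperclip_count": 1e6,  "earth_converted_to_paperclips": False}
catastrophe = {"paperclip_count": 1e30, "earth_converted_to_paperclips": True}

chosen = max([status_quo, catastrophe], key=utility)
print(chosen["earth_converted_to_paperclips"])  # True
print(predicted_human_approval(chosen))         # -1.0: it knew, and didn't care
```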
Second principle: Build machines that know their limitations. This basically means that a machine should confine its activities to those areas where it does understand the effects of its actions.
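One crude way to render the second principle in code (the model, the actions, and the threshold below are all hypothetical): before acting, the machine asks its own model how well it understands the consequences, and declines anything below a confidence bar.

```python
# A crude sketch of "know your limitations": act only where the agent's own
# model of consequences is confident. Names and thresholds are invented.

from dataclasses import dataclass

@dataclass
class Prediction:
    outcome: str
    confidence: float  # the agent's self-assessed probability of being right

def predict_effect(action: str) -> Prediction:
    # Stand-in for whatever world model the agent actually has.
    known = {
        "sort the mail":        Prediction("mail sorted", 0.98),
        "rewrite the tax code": Prediction("general welfare promoted", 0.30),
    }
    return known.get(action, Prediction("unknown", 0.0))

def act(action: str, threshold: float = 0.95) -> str:
    p = predict_effect(action)
    if p.confidence < threshold:
        return f"decline: I don't understand the effects of '{action}' well enough"
    return f"do it: expecting '{p.outcome}'"

print(act("sort the mail"))
print(act("rewrite the tax code"))
```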
But in order to do that, we first have to be able to build a machine that can actually understand something — anything — in the full human-level meaning of understanding. And that is the necessary first step to a future of useful and beneficial AI, and it’s what anyone concerned about such things should be working on.