Over at Accelerating Future, Michael Anissimov has a post about self-replication in which he seems to find it remarkable that Foresight, among others, can view a world containing mechanical replicators with aplomb:
What is remarkable are those that seem to argue, like Ray Kurzweil, the Foresight Institute, and the Center for Responsible Nanotechnology, that humanity is inherently capable of managing universal self-replicating constructors without a near-certain likelihood of disaster.
From this he jumps with very few intervening arguments (“there are terrorists out there”) to a conclusion that we need a benevolent world dictatorship (“singleton”), which might need to be a superhuman self-improving AI. This seems a wildly illogical leap, but surprisingly appears to be almost an article of faith in certain parts of the singularitarian community and Washington, DC. Let us examine the usually unstated assumptions behind it:
- Humanity can't manage self-replicating universal constructors: We've been managing self-replicating universal constructors for tens of thousands of years, from elephants to yeast. What's more, these are replicators that can operate in the wild. The design process, e.g. turning a wolf into a Pekingese, takes longer but is much more intuitive to the average human.
- If you're worried about high-tech terrorists, worry about genetically engineered swine flu or other naturally reproducing agents. If there are terrorists out there so technically sophisticated as to be a threat with MNT (at best guess still 20 years away for the leading mainstream labs), why aren't they doing this already? Even terrorist Berkeley professors only make letterbombs.
- Once the leading mainstream labs produce self-replicating universal constructors, they are hardly going to hand them out by the billions for people to make shoes with. As Eric Drexler recently pointed out, specialized mill-style machinery is considerably more efficient than universal constructors at actually making stuff; my analysis of this point is that the difference is months for universal constructors versus milliseconds for specialized mills (see the back-of-envelope sketch after this list). Nobody is going to want universal constructors except for research.
- Note that a truly universal constructor at the molecular level would, even under current law, require a bushel of different licenses to operate: one for each of the regulated substances it was capable of making. Sony is not going to be selling these things on the streets of Mumbai.
- Anyway, there already is a "singleton": the US government. It has clearly demonstrated a willingness to act to prevent even nuisance-level WMD by actors outside the currently accepted group. (By nuisance-level I mean ones which pose no serious threat of toppling the US from its dominant military position.) The notion of producing from scratch an entity (AGI or whatever) that would not only seriously threaten US dominance but depose it without a struggle seems particularly divorced from reality. (Note that the US military is the leading funder and user of AI research and always has been.)
- It seems to me that if you can make a self-improving, superhuman AGI capable of taking over the world, you could probably make a specialized AI capable of running one desktop fab box. Its primary purpose is to help the 99.999% of legitimate users produce safe and useful products. It doesn't even have to outsmart the terrorists by itself; it is part of a world-wide online community of other AIs and human experts with the same basic goals (a minimal sketch of such a screening flow follows this list).
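To put the months-versus-milliseconds claim in concrete terms, here is a back-of-envelope sketch. The throughput numbers are illustrative assumptions chosen only to reproduce the rough orders of magnitude in the argument; they are not figures from Drexler's analysis or from Anissimov's post.

```python
# Back-of-envelope: time for a fabricator to process a given product
# mass, given its mass throughput. The throughput values below are
# illustrative assumptions, not measured or published figures; they
# are picked only to show how a ~9-order-of-magnitude throughput gap
# becomes "months vs milliseconds" at the scale of a 1 kg product.

SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.6e6 s

def time_to_make(product_mass_kg: float, throughput_kg_per_s: float) -> float:
    """Seconds needed to process product_mass_kg at a given throughput."""
    return product_mass_kg / throughput_kg_per_s

product_mass = 1.0  # kg, say a pair of shoes

universal_constructor = 4e-7  # kg/s (assumed): ~1 kg in about a month
specialized_mill = 1e3        # kg/s (assumed): ~1 kg in a millisecond

print(f"{time_to_make(product_mass, universal_constructor) / SECONDS_PER_MONTH:.1f}"
      " months for the universal constructor")
print(f"{time_to_make(product_mass, specialized_mill) * 1e3:.1f}"
      " milliseconds for the specialized mill")
```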
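And for the last point, a minimal sketch of how such a fab-box AI's screening might be organized, assuming a local whitelist backed by referral to the wider community. Every name here (KNOWN_SAFE, community_review, screen_job) is hypothetical, invented for illustration rather than taken from any real system.

```python
# Minimal sketch of a desktop-fab screening flow: serve legitimate
# jobs locally, refer anything unrecognized to the world-wide
# community of AIs and human experts. All names and the policy
# itself are hypothetical illustrations, not a real protocol.

KNOWN_SAFE = {"shoe_sole_v3", "phone_case_v1"}  # assumed local whitelist

def community_review(design_id: str) -> bool:
    """Stand-in for consulting the online network of experts;
    here it conservatively rejects anything it does not know."""
    return False

def screen_job(design_id: str) -> str:
    if design_id in KNOWN_SAFE:
        return "fabricate"        # the legitimate 99.999% of requests
    if community_review(design_id):
        return "fabricate"        # vetted by the wider community
    return "refuse and flag"      # unvetted or suspicious request

print(screen_job("shoe_sole_v3"))     # -> fabricate
print(screen_job("nerve_agent_mk1"))  # -> refuse and flag
```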
The bottom line is that consumer-level desktop nanofactories are really a non-problem. That’s not to say that national- (or even major university-) level research labs could not be a threat, but then they already are, on the biotech side, and the same kinds of safeguards we have there, and more, can be applied to leading-edge nanotech research.