Replicating nanofactories redux
Over at Accelerating Future, Michael Anissimov continues the discussion about nanofactories. He says a number of reasonable things, but then mischaracterizes, or at least greatly oversimplifies, Foresight’s position on nanofactories and self-replicating machines in general:

The general implied position of the Foresight Institute appears to be, “we’ll figure these things out as we go, MNT should be developed as soon as people put up the funding for it, everything will pretty much be fine”.

Foresight was founded in 1986 to consider exactly these kinds of issues — that’s why it was called Foresight — and we’ve been the leader in the study of the implications of advanced nanotech ever since. In those years the public perception of what nanotech might be capable of has swung wildly from one extreme to another.

Here, for the record, is Foresight’s actual position on these questions. (None of this, by the way, should be taken as any kind of a point-for-point response to Michael’s post, except for the general question of what Foresight’s position is.)

  • Nanotech and AI have the potential to have as transformative an effect on the human condition as fire, clothing, the wheel, houses, agriculture, writing, boats, science, or heat-powered machinery. It’s quite likely that this phase change will be well underway within the coming half-century.
  • The overall effect of these technologies has been, and for nanotech and AI will be, immensely more beneficial than harmful. There are, however, pitfalls associated with any such major change.
  • The value of having foresight is that one can steer towards the benefits and away from the pitfalls, or be prepared to use the new capabilities to ameliorate bad effects. However, in order to do this effectively, one must study the technology in depth, actually understanding what the capabilities really are.
  • It has always been Foresight’s position that prompt development of capabilities by a responsible mainstream is the best defense against their inevitable development by others.
  • Unjustified alarmism about poorly-studied but spectacular dangers of future technologies is irresponsible and counterproductive. The alarmist sits at the root of a memetic chain letter, reaping near-term benefits from publicity, but the overall effect is to slow and warp development of beneficial capabilities. This condemns the vast majority of humanity to needless poverty, disease, and death.

An example of a pitfall is the religious wars that racked Europe following the invention of the printing press and the subsequent expanding literacy in the population. Does this mean there should be no printing press and the serfs should remain illiterate? Surely not. Could the wars have been prevented or ameliorated? Who knows — but in order for that to have happened, the printing press would have to have been put into use even though the wars were foreseen. Too strong a concern by the religious “singleton” and the printing press would have been “relinquished.”

An example from nanotechnology is the “gray goo” question. Reasoning by analogy to bacteria, could experimental nanoreplicators accidentally be released and go munching through the biosphere? This was a legitimate concern at a time when the leading model for nanofabrication was vats of bacteria-like assemblers. But years of inquiry led to several new insights: accidentally modifying an assembler to live in the wild would be like accidentally fixing a car to forage for tree sap. Free-floating assemblers weren’t close to the best manufacturing architecture anyway — others, such as nanofactories, would be much better and would obviate even the conceptual possibility of gray goo. Foresight was doing this 20 years ago.

But such insights — illuminating both the pitfalls and pathways to the benefits — can only be had by working out actual architectures: inventing the technology at a level detailed enough to make specific predictions about its capabilities. Exploratory engineering may look like unthinking gung-ho development from the outside but it is an essential part of foresight.

  1. JamesG May 4, 2009 at 7:06 am

    Assemblers may not be the best approach to nanotech, but a nanofab unit could make assemblers or something like them, since they build things atom by atom, right? So a malicious entity could unleash a malicious assembler on purpose? More or less “grey goo”, and still a threat. I’m in favor of making nanotech of whatever sort; it’s just becoming obvious to me that people are not going to have complete freedom to do what they want with these…

  2. Anonymous May 4, 2009 at 8:39 pm

    Personally, I’m significantly more concerned about the short term economic effects of these new technologies. Why am I concerned? Because we hear only two people talking about the economic impacts (Robin Hanson and James Albus… and James Albus is a lunatic) and only the nutcase is offering a solution.

  3. J. Storrs Hall May 5, 2009 at 11:29 am

    Actually we had both Albus and Hanson speaking about economic impacts at the AGI-09 Workshop on the Future of AGI. At the highest level of abstraction, they both said the same thing: people had better own capital, because a quarter-century from now (give or take), it’s going to be getting pretty hard to earn a living from labor.

    You may consider Albus’ scheme to do that politically naive, but that doesn’t make him a nutcase. Before retiring, he was one of the leading roboticists in the world.

  4. Anonymous May 6, 2009 at 5:57 pm

    Hello Josh and others, a few questions and comments about the above:
    Didn’t Robert Bradbury also advise people, if they could, to purchase a piece of land, preferably with abundant sunlight, for the same basic reason? So would you suggest someone buy, say, a plot of Mojave Desert land if they could, like a few acres? Your assemblers or nanosystems could then turn sunlight and the sand and dirt into consumer goods for you. How would you deal with the lack of water?

    Secondly: A new book out called THE TRANSFORMERS and PHILOSOPHY has a whole chapter written by Josh Storrs Hall. Josh you did an EXCELLENT JOB! Thank you! I love how you describe some of the capabilities of robotics and nanotech and AI.

  5. Anonymous May 9, 2009 at 5:12 pm

    Actually, Messrs Hanson and Albus are not the only people to have considered various ramifications of these issues. Admittedly, the rest of us stumble along outside of the limelight, but we do contribute in our own meager fashion.

    I look forward to your next appearance of FFR Dr. Hall.

  6. Anonymous May 14, 2009 at 5:07 pm

    This bit confuses me: “nanofactories … would obviate even the conceptual possibility of gray goo.”

    I readily stipulate the following: (1) nanofactories will be massively more efficient; (2) the story about the car foraging for tree sap reasonably describes the engineering challenge of designing a gray-goo-bot; (3) the difficulty of designing a GGbot means it could never be designed or built accidentally; (4) there’s a fair chance that even a perfectly designed GGbot wouldn’t find enough energy (or find it quickly enough) to outcompete the bacteria and insects and other natural enemies it would encounter.

    It’s still not impossible that some bad guy could undertake the design of a GGbot and could come up with something, if not universally destructive in the gray goo sense, at least destructive enough to make a lot of trouble. I don’t see how this theoretical possibility is affected in the slightest by the greater efficiency of nanofactories.

    Is the idea that more efficient nanofactories give you an overwhelming advantage in the arms race against the bad guy? Very likely true, but he is still a big threat if he can act for a sufficient span of time before you become aware of his activities.

    Am I missing something here?
