Over at Accelerating Future, Michael Anissimov continues the discussion about nanofactories. He says a number of reasonable things, but then mischaracterizes, or at least greatly oversimplifies, Foresight’s position on nanofactories and self-replicating machines in general:
> The general implied position of the Foresight Institute appears to be, “we’ll figure these things out as we go, MNT should be developed as soon as people put up the funding for it, everything will pretty much be fine”.
Foresight was founded in 1986 to consider exactly these kinds of issues — that’s why it was called Foresight — and we’ve been the leader in the study of the implications of advanced nanotech ever since. Over those years, public perception of what nanotech might be capable of has swung wildly from one extreme to the other.
Here, for the record, is Foresight’s actual position on these questions. (None of this, by the way, should be taken as a point-for-point response to Michael’s post, except on the general question of what Foresight’s position is.)
- Nanotech and AI have the potential to have as transformative an effect on the human condition as fire, clothing, the wheel, houses, agriculture, writing, boats, science, or heat-powered machinery. It’s quite likely that this phase change will be well underway within the coming half-century.
- The overall effect of such transformative technologies has been, and for nanotech and AI will be, immensely more beneficial than harmful. There are, however, pitfalls associated with any such major change.
- The value of having foresight is that one can steer towards the benefits and away from the pitfalls, or be prepared to use the new capabilities to ameliorate bad effects. However, in order to do this effectively, one must study the technology in depth, actually understanding what the capabilities really are.
- It has always been Foresight’s position that prompt development of capabilities by a responsible mainstream is the best defense against their inevitable development by others.
- Unjustified alarmism about poorly-studied but spectacular dangers of future technologies is irresponsible and counterproductive. The alarmist sits at the root of a memetic chain letter, reaping near-term benefits from publicity, but the overall effect is to slow and warp development of beneficial capabilities. This condemns the vast majority of humanity to needless poverty, disease, and death.
An example of a pitfall is the religious wars that racked Europe following the invention of the printing press and the subsequent spread of literacy among the population. Does this mean there should have been no printing press and the serfs should have remained illiterate? Surely not. Could the wars have been prevented or ameliorated? Who knows — but for that to have happened, the printing press would have had to be put into use even though the wars were foreseen. Too strong a concern on the part of a religious “singleton,” and the printing press would have been “relinquished.”
An example from nanotechnology is the “gray goo” question. Reasoning by analogy to bacteria, could experimental nanoreplicators accidentally be released and go munching through the biosphere? This was a legitimate concern at a time when the leading model for nanofabrication was vats of bacteria-like assemblers. But years of inquiry led to several new insights: Accidentally modifying an assembler to live in the wild would be like accidentally modifying a car to forage for tree sap. Free-floating assemblers weren’t close to the best manufacturing architecture anyway — others, such as nanofactories, would be much better and would obviate even the conceptual possibility of gray goo. Foresight was doing this kind of analysis 20 years ago.
But such insights — illuminating both the pitfalls and the pathways to the benefits — can be had only by working out actual architectures: inventing the technology at a level of detail sufficient to make specific predictions about its capabilities. Exploratory engineering may look like unthinking gung-ho development from the outside, but it is an essential part of foresight.