- Feynman’s Path to Nanotech (part 1)
- Feynman’s Path to Nanotech (part 2)
- Feynman’s Path to Nanotech (part 3)
- Feynman’s Path to Nanotech (part 4)
- Feynman’s Path to Nanotech (part 5)
- Feynman’s Path to Nanotech (part 6)
- Feynman’s Path to Nanotech (part 7)
- Feynman’s Path to Nanotech (part 8)
- Feynman’s Path to Nanotech (part 9)
- Feynman’s Path to Nanotech (part 10)
So why hasn’t the Feynman Path been attempted, or at least studied and analyzed? One possible reason is that there still seems to be a “giggle factor” associated with the notion of a compact, macroscale, self-replicating machine using standard fabrication and assembly techniques. Although studied in the abstract since von Neumann, and in physical systems in biology over roughly the same period, kinematic self-replicating machines remain poorly characterized as a field of engineering.
One reason for the giggle factor is that we have a strong instinct, well founded in experience, that in standard technology a factory is much bigger and more complex than whatever it makes. This instinct is neatly captured in a video of a Lego car factory that is literally thousands of times as complex as the cars it assembles.
I can’t count the number of times I’ve been reassured that self-replication is easy. After all, John von Neumann, back in the 1940s, clearly defined the logic of self-replication, and all we have to do is implement his “blueprint”. His automata theory anticipated the later research on biological reproduction. In the early 1980s there was a flurry of activity — especially the Robert Freitas papers and the NASA summer study of 1980 (Advanced Automation for Space Missions, held at the University of Santa Clara, NASA CP-2255) — which strongly advocated that NASA embark on a new technological strategy embracing automation, robotics, and self-replication. In 1985, after Tihamer Toth-Fejel showed him the 1980 NASA summer study, Gregg Maryniak wrote an excellent article on SRSs in the SSI Update. More recently, two rather optimistic books on the emerging field of Artificial Life were published: Steven Levy’s Artificial Life and Claus Emmeche’s The Garden in the Machine. Within the past year, Lackner and Wendt, obviously inspired by chapter 18 of Freeman Dyson’s Disturbing the Universe, published a paper with the exciting title “Exponential Growth of Large Self-Reproducing Machine Systems”.
Furthermore, it seems apparent to me that of all the processes that we observe in the biological world, self-replication is relatively easy. Despite the “inevitability reassurances” of Stuart Kauffman and Christian de Duve, I view the origin (or origins if you wish) of life as incredibly hard. If autocatalysis and other spontaneous appearances of great complexity are so inevitable, why can we not observe them in nature or in the laboratory? Biological self-replication on the other hand can be observed any time we choose to look. Other processes such as homeostasis, morphogenesis, epigenesis, endosymbiosis, evolution, cognition, consciousness and conscience are, in my opinion, also far harder than self-replication.
And finally, engineering designs can learn from biology, but certain practical simplifications are possible (and desirable to reduce cost and complexity) as we try to apply SRS to space colonization. For example, we need not go to the trouble of incorporating the blueprint for the entire system into every subsystem. We need not assemble the system in a tortuous series of incremental developments which recapitulate earlier design generations. We need not design for inheritable mutations. Indeed, as we concentrate on the colonization of the solar system, we can practically maintain all genetic control of all space-borne self-replicating systems by human scientists and engineers on Earth. Essentially, as Ralph Merkle puts it, we can “broadcast” the genetic information from Earth rather than encode it within the generations of SRS.
So if self-replication is so easy, where are all the SRSs?
On a personal note, I was in a discussion of nanotech in a long-view futurist setting about a decade ago (see p. 5) which also included Mark Reed of Yale, a top academic nanotech researcher. I had finished describing the possibilities of atomically-precise motors, gears, shafts, pulleys, and the rest, and mentioned that I was quite certain we’d have them sometime in the 21st century. The moderator was somewhat incredulous, and asked Reed if he believed that, to which Reed replied (in my certainly not verbatim recollection) that of course we would, but that they wouldn’t be able to self-replicate.
One reason for this skepticism, and indeed for the difficulty of designing KSRMs (kinematic self-replicating machines, as distinguished from cellular automata and other self-replicating models) in the first place, is that KSRMs defy standard design methodologies. In standard top-down design, a critical part of the specification for a machine is its capabilities; to build a manufacturing system, for example, it is useful to know what it must manufacture. For a self-replicating system, however, the product is the system itself, the specifics of which are not known until the design is completed. The design process therefore more closely resembles the solution of a system of equations by iteration or relaxation than the one-pass evaluation of a closed formula.
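The iteration-or-relaxation analogy can be made concrete with a toy fixed-point computation. This is purely illustrative: the `relax` helper and the contraction mapping standing in for "one design revision pass" are made up for the sketch, not an actual design tool.

```python
def relax(update, x0, tol=1e-9, max_iter=1000):
    """Generic relaxation: apply `update` until the state stops changing."""
    x = x0
    for _ in range(max_iter):
        x_next = update(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# Toy analogy: an SRS design must satisfy c = f(c), since the
# machine's product is the machine itself. Here f is a made-up
# contraction mapping standing in for one design revision pass.
fixed_point = relax(lambda c: 0.5 * c + 10.0, 0.0)
print(fixed_point)  # converges to 20.0, the self-consistent "design"
```

The point of the analogy: you cannot evaluate the design once and be done, because each revision changes the product the machine must be able to make; you revise until the design stops changing.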
The majority of published KSRM designs have either required complex subsystems as “raw material,” resulting in greatly over-simplified construction capabilities, or have foundered on extreme system complexity because of a requirement to use naturally occurring raw material inputs (e.g. the 1980 NASA study). The prevalence of these extremes has led to a perception that there is no practical middle ground between them.
It may be, however, that there are the beginnings of a shift on the subject. One reason is RepRap, which demonstrates that the capabilities of a solid freeform fabricator could actually be useful in a self-replicating system. (RepRap is about a quarter of a self-replicating machine — it makes about half its own parts, and making its parts is about half the problem, the other half being assembling them. But it does represent something of a conceptual breakthrough with respect to the giggle factor.)
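The “quarter of a self-replicating machine” figure is just the product of the two halves above; a one-line calculation using the rough estimates from the text (these are estimates, not measurements):

```python
# Rough replication-closure estimate for RepRap, using the
# rough estimates from the text (not measured values).
parts_it_can_make = 0.5   # fraction of its own parts it fabricates
fabrication_share = 0.5   # making parts is about half the problem;
                          # assembling them is the other half
closure = parts_it_can_make * fabrication_share
print(closure)  # 0.25, i.e. about a quarter of a self-replicating machine
```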
It seems clear that a major step toward the Feynman Path would be to work out a scalable architecture for a workable KSRM that actually closed the circle all the way. A reasonable start would be a deposition-based fab machine, a multi-axis mill for surface tolerance improvement, and a pair of waldoes. See how close you could get to replication with that, and iterate.
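The “see how close you get, and iterate” loop amounts to repeatedly computing how much of the kit’s own bill of materials the kit can reproduce. A minimal sketch of that bookkeeping, in which every part name, process label, and number is hypothetical, chosen only to show the shape of the iteration:

```python
# Hypothetical sketch: estimate how much of its own bill of materials a
# starter kit (deposition fab + multi-axis mill + waldoes) can reproduce.

# Made-up bill of materials: part -> process needed to make it.
bom = {
    "frame": "deposit",
    "gear_train": "deposit+mill",
    "spindle": "deposit+mill",
    "motor": "purchased",       # not yet closed: bought in
    "controller": "purchased",  # not yet closed: bought in
    "end_effector": "deposit+mill",
}

capabilities = {"deposit", "deposit+mill"}  # what the kit itself can do

def closure(bom, capabilities):
    """Fraction of the bill of materials the kit can make itself."""
    made = [p for p, proc in bom.items() if proc in capabilities]
    return len(made) / len(bom)

print(f"closure: {closure(bom, capabilities):.2f}")
# Design iteration: redesign a "purchased" part so the kit can make it
# (or extend the kit's capabilities), recompute closure, and repeat
# until the circle closes.
```

Each pass either redesigns a bought-in part into something the kit can fabricate and assemble, or adds a capability to the kit; the closure fraction is the convergence measure for the relaxation process described earlier.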
There are plenty of other problems, but this would at least give us a framework to address them in.