The Feynman path is summarized in a series of Nanodot posts:
The idea is to start from the macroscale machining and fabrication and move to the nanoscale without ever losing the general fabrication and manipulation ability.
* there still seems to be a “giggle factor” associated with the notion of a compact, macroscale, self-replicating machine using standard fabrication and assembly techniques
* In standard technology a factory is much bigger and more complex than whatever it makes
* KSRMs (kinematic self-replicating machines) are difficult
* KSRMs defy standard design methodologies
A major step toward the Feynman Path would be to work out a scalable architecture for a workable KSRM that actually closed the circle all the way. A reasonable start would be a deposition-based fab machine, a multi-axis mill for surface-tolerance improvement, and a pair of waldoes. See how close you could get to replication with that, and iterate.
A full machining and manipulation capability at the microscale would allow lapping, polishing, and other surface-improvement techniques that photolithography-based MEMS cannot provide.
The bottom-up folks are not nearly as close to real nanotech as the nano-hype news coverage suggests.
Top-down and bottom-up can meet in the middle. When nanoscientists succeed in making an atomically precise nanogear, for example, it means that when the Feynman Path machines get to that scale, they can take the gear off the shelf instead of having to fabricate it the hard way. In fact it seems likely that bottom-up approaches will be the way parts are made and top-down the way they’re put together.
I’ll stick my neck out and say at a wild guess that if only bottom-up approaches are pursued, we have 20 years to wait for real nanotech; but if the Feynman Path is actively pursued as well, it could be cut to 10.
1. Is it in fact possible to build a compact self-replicating machine using macroscopic parts fabricators and manipulators? We know that a non-compact one is possible — the world industrial infrastructure can replicate itself — and we know that a compact microscopic replicator can work, e.g. a bacterium. But the bacterium uses diffusive transport, associative recognition of parts by shape-based docking, and other complexity-reducing techniques that are not available at the macroscale.
2. Not quite the same question: how much cheating can we get away with? In KSRM theory, it’s common to specify an environment for the machine to replicate in and some “vitamins,” bits that the machine can’t make and that have to be provided, just as our bodies can’t synthesize some of the molecules we need and must get them pre-made in our diets.
1. Design a scalable, remotely-operated manufacturing and manipulation workstation capable of replicating itself anywhere from its own scale to one-quarter relative scale. As noted before, the design is allowed to take advantage of any “vitamins” or other inputs available at the scales they are needed.
2. Implement the architecture at macroscale to test, debug and verify the design. This would be a physical implementation, probably in plastic or similar materials, at desktop scale, and would include operator controls that would not have to be replicated.
3. Identify phase changes and potential roadblocks in the scaling pathway and determine scaling steps. Verify scalability of the architecture through these points in simulation. Example: electromagnetic to electrostatic motors. It would be perfectly legitimate to use externally supplied coils above a certain scale if they were available, and shift to electrostatic actuation, which would involve only conducting plates, below that scale, and never require the system to be able to wind coils.
4. Identify the smallest scale, using best available fabrication and assembly technology, at which the target architecture can currently be built.
5. Write up a detailed, actionable roadmap to the desired fabrication and manipulation techniques at the nanoscale.
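The quarter-scale requirement in step 1 implies a concrete generation count for the whole descent. A minimal back-of-envelope sketch, assuming a ~30 cm desktop starting workstation and a ~100 nm target (both sizes are my assumptions, not from the text), counts the quarter-scale generations needed and checks the material budget: each child machine uses (1/4)³ ≈ 1.6% of its parent’s material by volume.

```python
import math

SCALE_FACTOR = 0.25     # each generation is one-quarter the size of its parent
START_SIZE_M = 0.3      # assumed desktop-scale starting workstation (~30 cm)
TARGET_SIZE_M = 100e-9  # assumed nanoscale target (~100 nm)

# Number of quarter-scale generations needed to descend from start to target.
generations = math.ceil(math.log(TARGET_SIZE_M / START_SIZE_M)
                        / math.log(SCALE_FACTOR))

# Material used by each child relative to its parent scales with volume (L^3).
material_fraction = SCALE_FACTOR ** 3

print(f"generations needed: {generations}")
print(f"material per generation: {material_fraction:.1%} of the parent's")
# With these assumed sizes: 11 generations, ~1.6% material per generation
```

The ~1.6% figure is what lies behind the “less than 2% of the material” point in the RepRap comparison below: shrinking linear size by 4 shrinks volume, and hence material, by 64.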
In 1994, Japanese researchers at Nippondenso Co. Ltd. fabricated a working electric car at 1/1000 scale. As small as a grain of rice, the micro-car was a replica of Toyota Motor Corp.’s first automobile, the 1936 Model AA sedan.
* It seems very likely that the motors we use will be electrostatic steppers
* A particularly important aspect of the Feynman Path is that not much more than halfway down to molecular scale in part size, we already hit atomic scale in tolerance
* A Feynman Path workcell actually avoids the problem that a standard solid-freeform-fab (SFF) design has with building something its own size, because it’s building a copy that’s smaller than itself
* electrodeposition (and electro-removal, as in EDM) and electroplating will be useful
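The tolerance point in the list above can be made quantitative. A rough sketch, assuming precision machining holds a relative tolerance of about 1 part in 10⁴ and taking ~0.2 nm as atomic scale and ~10 cm to ~1 nm as the part-size descent (all assumed figures, not from the text), finds the part size at which achievable tolerance reaches one atom:

```python
import math

RELATIVE_TOLERANCE = 1e-4  # assumed: good machining holds ~1 part in 10,000
ATOMIC_SCALE_M = 0.2e-9    # roughly one atomic diameter
MACRO_SIZE_M = 0.1         # assumed macroscale starting part size (~10 cm)
MOLECULAR_SIZE_M = 1e-9    # molecular-scale part size

# Part size at which the achievable absolute tolerance equals one atom.
crossover_size = ATOMIC_SCALE_M / RELATIVE_TOLERANCE  # 2 micrometres

# How far down the (logarithmic) size ladder that crossover sits.
fraction_of_descent = (math.log(MACRO_SIZE_M / crossover_size)
                       / math.log(MACRO_SIZE_M / MOLECULAR_SIZE_M))

print(f"crossover part size: {crossover_size * 1e6:.0f} um")
print(f"fraction of the descent: {fraction_of_descent:.0%}")
# With these assumed figures: crossover at ~2 um, about 59% of the way down
```

Under these assumptions the crossover lands at roughly 2 µm, about 59% of the logarithmic distance from macroscale to molecular scale, which is what “not much more than halfway down” suggests.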
The Feynman Path initiative is a specific, concrete proposal — but more, it’s one that can be done in an open-source way, for at least the first, roadmap, stage.
There’s a fundamental similarity between a Feynman Path machine (FPm) and a RepRap, obviously, in their orientation to self-replication. This includes the fact that both schemes require a human to be actively involved in the replication process, in the FPm by teleoperation. But there are some fundamental differences:
Attitude to cost: a RepRap is intended to be a means to cheap manufacturing, so it’s oriented to using the least expensive materials available. An FPm has much less concern about that: each successive machine in the series uses less than 2% of the material of the previous one. It would be perfectly reasonable to design an FPm that had to carve all its parts out of solid diamond, once past the millimeter scale, for example. The goal is to understand principles, not supplant the economy (at least until the nanoscale is reached).
Attitude to closure: RepRap assumes human assembly labor, but an FPm has to provide its own manipulating capabilities. RepRap allows exogenous parts that are widely available and inexpensive; an FPm allows only parts that are available at every scale where they’re needed.
Assembly time vs accuracy: As a consumer-goods production machine, RepRap has at least some concern for how long it takes to do its job. An FPm has much less concern about time, but much more about accuracy, since it has to improve its product’s tolerance over its own by a substantial factor.
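That “substantial factor” of tolerance improvement follows directly from the scale step. A minimal sketch, assuming quarter-scale generations and a parent machine built to 10 µm absolute tolerance (an illustrative figure, not from the text): to keep relative tolerance constant, each machine must cut 4× finer than it was itself built.

```python
SCALE_FACTOR = 0.25       # child machine is one-quarter the parent's size

# To hold relative tolerance constant across generations, absolute tolerance
# must shrink with part size, so each machine must produce parts
# 1/SCALE_FACTOR = 4x more accurate than the parts it is made of.
tolerance_improvement = 1 / SCALE_FACTOR

parent_abs_tol_m = 10e-6  # assumed: parent built to 10 um absolute tolerance
child_abs_tol_m = parent_abs_tol_m * SCALE_FACTOR

print(f"required improvement per generation: {tolerance_improvement:.0f}x")
print(f"child parts must be held to {child_abs_tol_m * 1e6:.1f} um")
# With these assumptions: 4x improvement per generation, child parts at 2.5 um
```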
(NB: this summary was originally written by Brian Wang, to whom great thanks!)