The common definition of “nanofactory” is a desktop, user-friendly system capable of building macroscale products by positional placement of individual atoms. Dr. Hall appears to be using the term to describe “any nanomachine that makes another nanomachine,” but having read the writings of the Center for Responsible Nanotechnology (CRN) for about five years, and having seen them use the term “nanofactories” thousands of times to refer to advanced desktop systems rather than to nanomachines-building-nanomachines in general, I am specifically referring to the former.
A pretty widely-accepted definition of nanofactory is at Merkle and Freitas’ Nanofactory Collaboration site:
The nanofactory is a proposed compact molecular manufacturing system, possibly small enough to sit on a desktop, that could build a diverse selection of large-scale molecularly precise diamondoid products. The nanofactory is potentially a high quality, extremely low cost, and very flexible manufacturing system.
My own definition, in Nanofuture (look in the index), describes something more broadly capable than a diamondoid-only system; I refer to it as a “synthesizer” in most places.
Other places where the word originally appeared, and became well known, are Chris Phoenix’s nanofactory paper and Drexler and Burch’s visualization. There too, it is generally used to mean a desktop manufacturing system.
But it’s likely that the term will become broadened, as happened to “nanotechnology” itself. (Parenthetically, it’s owned as a trademark by this company.) So I think that history shows that pretty much any desktop manufacturing system will be called a nanofactory, and we purists will be left arguing that there ought to be something in there about mechanosynthesis and atomic precision to the few who will listen.
From the outside, the line from here to nanofactories goes through 3D printers. To the user, a nanofactory as described in the technical discussions above is just a 3D printer that can produce a wider variety of products than the ones now available. My guess is that such machines will start being called nanofactories when the process includes nanoscale printing (as in embedded circuitry, or surface nantennas for color effects and photovoltaics). That can be done now, so it shouldn’t be too long before someone includes it in a solid-freeform-fabrication process.
So: why should we expect a sudden jump in 3D printer capabilities? We shouldn’t. They will continue getting cheaper for the same capabilities, and more capable for the same price; but on the same smooth growth curve we’ve been seeing all along.
On the other end of the scale, a billion-dollar factory will always be able to out-produce a million-dollar factory, and that will always out-produce a thousand-dollar countertop machine. The fact that the workstation I’m using to write this essay could out-calculate all the Cray-1s ever built doesn’t mean that they quit building supercomputers: it means that for the same money they build unimaginably monsterific humongulated gigantoid supercomputers. The same will be true of factories.
Long before tabletop nanofactories can produce other tabletop nanofactories, billion-dollar industrial plants will be able to do so, and nanofactories will be as common as laser printers. These nanofactories will be able to make “almost anything,” but not programmable mechanosynthetic equipment. Simple specialized production mechanisms are cheaper and much faster at any stage of the production chain. This is (very) basic economics; read Adam Smith on the manufacture of pins in The Wealth of Nations.
In this paper I point out that if a more complex design is available, it is always more efficient to use a simple replicator to build a more complex system than to keep replicating the simple one. If you have a nanofactory that can make another nanofactory, use the two of them to build a new system twice as big and complex as your first-generation model. Being bigger, it can contain more specialized sub-production lines and thus have higher total throughput than your two generation-one machines combined. Use it to make another generation-two machine, use those to make a generation three, and so forth. The total size grows exponentially, but the generation time shrinks exponentially as well, which means that system size and productivity grow not merely exponentially but asymptotically to a vertical line at some fixed time. A true singularity. (I called it Zeno’s Factory, but we could just as reasonably call it an unimaginably monsterific humongulated gigantoid superfactory.)
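The finite-time blowup can be seen in a toy model. The sketch below is an illustrative assumption, not taken from the paper: each generation doubles in size, and each generation's build time shrinks by an assumed constant factor. The build times then form a geometric series that sums to a finite limit, while size grows without bound, so the growth curve approaches a vertical line at a fixed time.

```python
# Toy model of "Zeno's Factory". Assumptions (mine, for illustration):
# each generation is twice the size of the last, and each build step
# takes half as long as the previous one.

def zeno_factory(generations, t0=1.0, speedup=0.5):
    """Return a list of (cumulative_time, size) after each generation.

    t0      -- time for generation one to build generation two (arbitrary units)
    speedup -- factor by which each successive build time shrinks (0 < speedup < 1)
    """
    history = []
    time, size = 0.0, 1.0
    for k in range(generations):
        time += t0 * speedup ** k  # geometric series: converges to t0 / (1 - speedup)
        size *= 2                  # each generation is twice as big as the last
        history.append((time, size))
    return history

for t, s in zeno_factory(10):
    print(f"t = {t:.4f}  size = {s:g}")
```

With these parameters the cumulative time never exceeds t0 / (1 - speedup) = 2.0, yet size doubles every step: by step 10 the factory is 1024 times its original size with the clock still short of t = 2. That is the "asymptotic to a vertical line" behavior in miniature.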
Of course, all this depends on the availability of resources, the speed of light, and a few other things; but the most important constraint is the availability of designs for the successive systems. Based on a ballpark estimate of those constraints, I’d guess that once you got to the level of complexity of the nanofactory that Michael describes:
A nanofactory would be a very complicated, “huge” thing. The Center for Responsible Nanotechnology compares the complexity of a molecular assembler to that of a Space Shuttle. I think the analogy would be apt for a nanofactory as well. We are talking about a miniature factory with more moving parts and individual computers than a typical 100 million-dollar modern factory today.
… you’re within a month of replacing the entire infrastructure of the Earth, every last farmer’s hut and the plants and animals grown for food as well as the cars, trucks, roads, and cities, with one vast, integrated machine. Luxury apartment, robot servants, personal aircraft, you name it, for everyone (and all still a tiny fraction of the capabilities of the overall machine). Ask for anything, and it will simply ooze out of the nearest wall, which will of course be a solid slab of productive nanomachinery (or Utility Fog). To recycle anything, just drop it on the floor.
The notion of a separate countertop factory in this world seems quaint.
So: for virtually the entire period that anyone actually uses anything called a nanofactory, nanofactories will be limited and (relatively) expensive compared to what the post-nanofactory world can provide.