Engineering and analysis in the field of self-replicating machines (SRMs) are unusual in many ways. Eric Drexler has posted a paper about differences in evolutionary capacity between mechanical and biological systems that’s worth a look.
Purely coincidentally, we at Foresight have been discussing self-replication in the context of the Feynman Path, and I came up with an example that shows just how counter-intuitive self-replication can be if you try to view it as a capability.
Self-replication is a poor criterion to use to judge risk, either of autonomous runaway or hijackability. Consider, for example, two versions of the Drexler/Burch nanofactory:
1) as shown: the input is pressurized canisters of fairly pure acetylene and possibly other refined chemical feedstocks.
2) the input is cassettes of nanoblocks, as output by the next-to-last stage in (1).
Now I claim that it isn’t too hard to make (2) self-replicating. All it does is slap nanoblocks together in the right patterns; maybe 10% of the total functionality of (1). And it’s a lot more likely you can design a machine that does that entirely out of nanoblocks. Bingo, a self-replicator.
On the other hand, it’s quite difficult to make a self-replicating version of (1). At its lowest, mechanosynthetic levels, (1) is a hardwired, cast-in-concrete gadget that builds nanoblocks. To build all the gadgetry in (1) as well would probably take 100 times as much mechanism.
Now to us, (1) is the much more capable machine. After all, look at all it’s doing. But to the user, (2) is much more capable. Both machines require the user to go out and buy feedstock containers — pressurized acetylene pods don’t grow on trees. Cost difference between pressurized cylinders and cassettes would be minimal: given the technology, it would be about as cheap to run the feedstock through a nanoblock maker and packer as to pump it into cylinders.
But machine 2 could make copies of itself and machine 1 could not.
And yet we know that not only does machine 1 do more stuff, but the range of outputs for the two machines is exactly the same! (Note that machine 1 can make a machine 2. Neither can make a machine 1.)
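The relations above can be sketched as a toy model. (The set of nanoblock-buildable products here is invented purely for illustration; the point is the structure of the build relation, not the particular items.)

```python
# Toy model of the build relations between the two nanofactories.
# The product names are illustrative assumptions, not from any real design.

# Things that can be assembled purely from nanoblocks. Machine 2 is itself
# made of nanoblocks; machine 1's mechanosynthetic stages are not.
BUILDABLE_FROM_NANOBLOCKS = {"widget", "gear", "machine_2"}

def outputs(machine):
    """Given feedstock, each machine's final stage slaps nanoblocks together,
    so both machines can produce exactly the nanoblock-buildable things."""
    assert machine in ("machine_1", "machine_2")
    return BUILDABLE_FROM_NANOBLOCKS

def self_replicating(machine):
    """A machine is self-replicating if it appears in its own output range."""
    return machine in outputs(machine)

# The two machines have exactly the same range of outputs...
assert outputs("machine_1") == outputs("machine_2")
# ...yet only machine 2 qualifies as a self-replicator:
assert self_replicating("machine_2")
assert not self_replicating("machine_1")
# Machine 1 can make a machine 2, but neither machine can make a machine 1:
assert "machine_2" in outputs("machine_1")
assert "machine_1" not in outputs("machine_1") | outputs("machine_2")
```

The asymmetry is entirely in where each machine sits relative to the buildable set, not in what the machines can produce.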
And yet which is more dangerous? Consider which one would do the government of, say, Iran, the most good today in bootstrapping itself to full nanotech capability, if one of each fell into its hands. Obviously (1).
So I claim that self-replication is essentially worthless as a criterion by which to judge risk of accidents or abuse.