With the Singularity Summit fast approaching, it’s worth spending a little time pondering the perennial question of nanotechnology vs. AI: which will happen first, will they be independent, symbiotic, or synergistic, and so forth?
I say perennial because this is a question that has been discussed at Foresight meetings ever since the first Conference 20 years ago. AI was mentioned as a potentially disruptive technology in Engines of Creation — whether or not it amounted to autonomous, human-level intelligence, automated design systems would enable the creation of highly complex nanosystems, well beyond the capabilities of mere human designers.
How did that prediction pan out? I would have to say that it was so accurate, and happened so soon, that it’s taken for granted today — human designers with pencil and paper would have no chance of designing any of today’s complex engineered systems. Like many areas, complex design is one that was once considered AI but isn’t any more. In the ’90s, as part of a big AI project at Rutgers, I wrote a program that designed pipelined microprocessors given a description of the desired instruction set. That kind of thing was already beginning to be considered merely “design automation” rather than AI by then, and it certainly would be now.
How about “real” AI, or AGI as it is beginning to be called now?
First of all, it is interesting to note that there are some strong similarities between real AI and real nanotechnology. Historically, they both began with a clear technological vision of great power. Great excitement and intellectual ferment grew up around the ideas. As the fields began to grow, however, the availability of funding attracted many people whose goals had more to do with getting money than following through to the vision. On different timescales, both fields experienced a diaspora in which most of the actual research dealt with near-term applications and did little to advance the central vision.
However, evolution works, and so eventually both fields will break through to their visions by trying everything. We can’t even say which of the sidetracks will turn out to have been dead ends and which ones are essential detours around roadblocks.
Foresight was strongly involved in the Productive Nanosystems Roadmap and we are now a sponsor of the AGI Roadmap. My futurist’s intuition tells me that real AI is about a decade off, and real nanotech a decade after that. But we shall see.
Neo-luddites tell us that giving humanity whatever powerful new technology they fear would be like “giving loaded guns to children.” And there’s certainly a grain of truth to that — our political process, for example, has managed to take nuclear power, which could have provided clean energy for everyone, and instead create weapons sufficient to wipe us all out, pointed at each other on hair triggers. (Of course the neo-luddites themselves bear a lot of the blame for that — at least the lack-of-clean-energy part.)
This would doubtless be the case with nanotech too. The power of nanotech, in the sense of the capability to create or destroy, significantly exceeds that of nuclear technology. We could easily wind up in a world with nano-weapons but no nanofactories.
One could even say the same for AI. But AI is capable of being different from other powerful technologies, if we build it right. There are a lot of likely pathways to widespread, useful, non-weaponized AI.
One could argue that children lost in woods full of wolves and bears would be better off with guns than without. But it’s still a tough call. With AI, however, we have an option totally new in the history of powerful technologies. We can give them … an adult.