This is the fourth essay in a series exploring if, when, and how the Singularity will happen, why (or why not) we should care, and what, if anything, we should do about it.
Part IV: When
So when is all this going to happen? To quote Mark Twain, I’m gratified to be able to answer that question immediately:
It depends.
In nanotech, I think that if there were a major, well-funded effort focused on getting to nanomachinery by trying lots of different pathways simultaneously — a full-court press — we might see some early limited lab prototypes in a decade; but there won’t be such an effort. Thus a better estimate would be 2030 or even 2040. (Counting from 1960, how soon would there have been a manned moon landing if there had been no Apollo project?)
In AI, the situation is perhaps brighter. The resources available to AI are likely to increase in the 20-teens as robotics and smart systems become more generally useful. As AIs start to cross the human range of capability, they will attract more investment and progress will accelerate. One key element in the takeoff is that as computers keep getting cheaper, a lot more people will be able to experiment.
On the other hand, AI has a disadvantage relative to nanotech. We know to some extent what the goal is in nanotech, and what an autogenous manufacturing system looks like. We’re working toward something we understand. But in AI, we don’t really know the corresponding key trick: how do you get a mind to extend itself? There are lots of ideas, including mine, but in a real sense the goal of AI is less well understood than that of nanotechnology.
Even so, what the goal would look like from the outside — an intelligent human — is perhaps more clearly seen than the nanotech one, and has been understood for 50 years. This has perhaps been at least as much a curse as a blessing. AI had its dark ages in the 80s, its renaissance in the 90s with robotics and machine learning theory, and is making pretty good progress.
I would guess — and this is blatantly a speculation, albeit a fairly well-informed one — that the “secret trick” of AI will fall in the next decade. That means that the 20s will see robots not just as good as humans at specific, well-defined tasks, but able to learn new tasks the way humans do.
Please remember that AIs won’t necessarily be autonomous robots — most of them will be like having a secretary built into your computer, a phone-answering system that acts like an intelligent receptionist, or a self-driving car (although I imagine that in the 20s having a robot butler will be a status symbol for a while). Things (including all the software you interact with) will get smarter.
If my guesses are right, by 2030 we would be beginning to see some significant economic pressure from the AI sector. And the 30s will be interesting times.
One of the more interesting aspects of that decade might be nanotechnology. Having AI online might significantly shorten the time from laboratory achievement of nanomachines to major real-world applications. Imagine, for example, the amount of engineering necessary to make a Drexler space suit work properly and safely. AI engineers (and AI-enhanced human ones) would make a huge difference to the development time.