This is the sixth essay in a series exploring if, when, and how the Singularity will happen, why (or why not) we should care, and what, if anything, we should do about it.
Part VI: The heavily-loaded takeoff
The fastest software I ever used ran on some of the slowest computers I ever had. Circa 1980, right around the time the original IBM PC was being introduced, there were a number of hobbyist and competitor computers based on processors such as the 8086. These had comfortably less than one MIPS of processing power, typically a lot less than one megabyte of memory, and couldn’t do much in terms of what we expect from modern-day systems. But what they could do, such as edit a text file that would fit entirely into memory, they did fast. You looked at 24 lines by 80 columns of text mapped directly into the computer’s memory (at physically hard-wired locations) and things happened instantly.
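The mechanism behind that instant response can be sketched in a few lines. This is a simplified simulation, not any particular machine's hardware: the 24-by-80 geometry comes from the text above, while the flat one-byte-per-cell buffer layout is an assumption for illustration.

```python
# Sketch of a memory-mapped 24x80 text display, as on early micros.
# The display hardware continuously scans a fixed region of RAM, so
# storing one byte at offset row*80 + col makes a character appear
# immediately: no redraw pipeline, no terminal protocol, no round
# trip to a time-shared mainframe.

ROWS, COLS = 24, 80
screen = bytearray(b" " * (ROWS * COLS))  # stands in for the "video RAM"

def put_char(row: int, col: int, ch: str) -> None:
    """A single store into video memory; the hardware does the rest."""
    screen[row * COLS + col] = ord(ch)

def put_string(row: int, col: int, text: str) -> None:
    for i, ch in enumerate(text):
        put_char(row, col + i, ch)

put_string(0, 0, "Hello, 1980")
print(bytes(screen[0:11]).decode())  # -> Hello, 1980
```

The point of the sketch is that "editing" was just arithmetic on an index plus a store instruction, which is why even a sub-MIPS processor felt instantaneous.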
Compare this with what came before, when we used terminals connected by 300-baud links to time-shared mainframes, and with what came after, when the software for micros got bigger and more capable and you were constantly swapping your data to a floppy disk.
The machine I’m writing this essay on could do the pure bit-flipping work of a million of those micros (e.g. in doing a quantum mechanics simulation of a molecule or a fluid flow simulation of a wind tunnel). It does not, unfortunately, let me be a million times as productive.
On the other hand, I am somewhat more productive. Obviously if I’m doing heavy scientific computation, I’m a lot better off. But even if I’m just writing essays, the enormous indexing and pattern-matching power of the computers at Google saves me hours of hunting down facts in a paper-book library. I have software that lets me produce polished, typeset documents that I would have had to go to a professional service for, back in the day. I can produce photorealistic pictures of imaginary scenes for purposes ranging from engineering to art. The horsepower of an 8086 simply wasn’t capable of any of these things.
There’s a phenomenon that is implicit in almost every economic analysis: the law of diminishing returns. It says, simply, that each dollar you spend is going to get you something worth less than what you got from earlier dollars. It’s simple because all it says is that if there were something more valuable to get, you would have gotten it first, and put off the less valuable thing.
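The logic can be made concrete with invented numbers (the figures below are illustrative only, not from the essay): if each successive dollar goes to the most valuable remaining use, the marginal values form a decreasing sequence, and the running total grows ever more slowly.

```python
# Illustrative diminishing returns. Each dollar buys the best use
# still available, so marginal values arrive in decreasing order.
# The specific numbers are made up for the example.
marginal_value = [100, 50, 25, 12, 6]  # value bought by dollars 1..5

totals = []
running = 0
for v in marginal_value:
    running += v
    totals.append(running)

print(totals)  # -> [100, 150, 175, 187, 193]
# The first dollar buys 100 units of value; the fifth buys only 6.
```

Nothing forces the sequence to decrease except the assumption in the text: if a more valuable use existed, it would have been bought first.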
The same thing is true of computing cycles. The text editor I used on a 1980s micro gave me a significant fraction of the value of the one I’m using now — say, at least 10 percent — for probably one hundred-thousandth of the cost in instructions. The first instructions in the editing program allow you to type text onto the screen. The billionth ones animate specular highlights on the simulated button-press as you select between nearly identical typefaces.
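Taking those rough figures at face value, the back-of-the-envelope arithmetic says the early editor delivered on the order of ten thousand times more value per instruction than the modern one:

```python
# Back-of-the-envelope from the figures in the text above.
old_value_fraction = 0.10   # early editor: roughly 10% of today's value...
old_cost_fraction = 1e-5    # ...for roughly 1/100,000 of the instructions

# Value delivered per instruction, relative to the modern editor:
relative_value_per_instruction = old_value_fraction / old_cost_fraction
print(round(relative_value_per_instruction))  # -> 10000
```

Both inputs are the essay's own estimates, so the 10,000x figure is only as good as they are; the point is the order of magnitude, not the exact number.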
The parallel, I hope, is clear. As AI and nanotech pervade the economy through the middle of the century, each additional unit of productive work will be put to a less valuable use, since we’re already doing the most valuable uses with the effort and resources we can presently bring to bear.
In some areas, such as scientific simulation, graphics, data mining, and so forth, every bit of extra computational horsepower is still useful. In others, such as text editing, it is not. There are lots of applications that are in between. The same thing will be true of the impact of nanotech in the physical world, and of AI in the economy in general. For example:
- The most obvious place where nanotech makes a huge difference is space development. Today’s best technology is none too good and hideously expensive. Nanotech is going to make the difference between doing it and not doing it.
- Any technology that is good enough to let you live on Mars is probably good enough to let you live in Tahiti — or more precisely, on a boat or floating city at Tahiti’s latitude, or indeed anywhere in the temperate or tropic regions. Tahiti itself is a lot like a boat now, in that everything except a few native crops has to be imported. Even a minor revolution in manufacturing and transportation will make it economically feasible to settle the oceans. I’d guess that by 2100, there will be a base on Mars, but more people than the world’s current population will live on (or in) the seas of Earth.
- Similarly, there is a lot of land that is expensive or uncomfortable to live on (but cheap to buy), in places like Canada and Russia. Expect nanotech to ameliorate this quite a bit. The bottom line is that advancing technology will give us something like five times as much livable area as we have now by the end of the century.
- With today’s personal-level hardware you can have photorealistic pictures or real-time action in a computer game, but not both. That distinction is on the way out, though. In the physical world, the ability to create high-fidelity virtual worlds that are indistinguishable from reality will take a long time. However, ones that are serviceable subsets will show up fairly soon.
- The early text editor had its font and the position of each character hard-wired, fixed to the screen. Modern computers have pixel-mapped screens, and can make characters whatever size, shape, and position they want. Similarly, early nanotech will feature fixed machines of the kind we’re used to now: flying cars, humanoid robots, buildings. Later nanotech will be a lot more fluid: cities composed of Utility Fog or the functional equivalent.
Remember that this will come on at the pace of the computer revolution, more or less, and that we’re somewhere comparable to 1960 right now. Feynman was Babbage, Drexler was Turing, and von Neumann was … von Neumann.
So relax. The really weird stuff shouldn’t hit until after mid-century.