Singularity, part 6

This is the sixth essay in a series exploring if, when, and how the Singularity will happen, why (or why not) we should care, and what, if anything, we should do about it.

Part VI: The heavily-loaded takeoff

The fastest software I ever used ran on some of the slowest computers I ever had. Circa 1980, right around the time the original IBM PC was introduced, there were a number of hobbyist and competitor computers based on processors such as the 8086. These delivered comfortably less than one MIPS of processing power, typically had much less than one megabyte of memory, and couldn’t do much of what we expect from modern-day systems. But what they could do, such as edit a text file that fit entirely into memory, they did fast. You looked at 24 lines by 80 columns of text mapped directly into the computer’s memory (at physically hard-wired locations), and things happened instantly.

Compare this with what came before, when we used terminals connected by 300-baud links to time-shared mainframes, and with what came after, when the software for micros got bigger and more capable and you were constantly swapping your data to a floppy disk.


The machine I’m writing this essay on could do the pure bit-flipping work of a million of those micros (e.g. in doing a quantum mechanics simulation of a molecule or a fluid flow simulation of a wind tunnel). It does not, unfortunately, let me be a million times as productive.

On the other hand, I am somewhat more productive. Obviously, if I’m doing heavy scientific computation, I’m a lot better off. But even if I’m just writing essays, the enormous indexing and pattern-matching power of the computers at Google saves me hours of hunting down facts in a paper-book library. I have software that lets me produce polished, typeset documents that I would have had to go to a professional service for, back in the day. I can produce photorealistic pictures of imaginary scenes for purposes ranging from engineering to art. The horsepower of an 8086 simply wasn’t capable of any of these things.

Diminishing returns

There’s a phenomenon that is implicit in almost every economic analysis: the law of diminishing returns. It says, simply, that each dollar you spend is going to get you something worth less than what you got from earlier dollars. It’s simple because all it says is that if there were something more valuable to get, you would have gotten it first, and put off the less valuable thing.

The same thing is true of computing cycles. The text editor I used on a 1980s micro gave me a significant fraction of the value of the one I’m using now — say, at least 10 percent — for probably one hundred-thousandth of the cost in instructions. The first instructions in the editing program allow you to type text onto the screen. The billionth ones animate specular highlights on the simulated button-press as you select between nearly identical typefaces.
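To make the diminishing-returns intuition concrete, here is a toy model of my own (not something from the essay): suppose the value a program delivers grows roughly with the logarithm of the instructions spent running it, so each extra instruction buys less than the one before. The instruction budgets below are assumed round numbers, chosen only to match the essay’s “one hundred-thousandth of the cost” ratio.

```python
import math

def value(instructions):
    """Illustrative log-utility: the value delivered by a program
    as a function of its instruction budget. Purely a toy model."""
    return math.log10(instructions)

old_editor = 1e4  # assumed budget of a 1980s editor (hypothetical number)
new_editor = 1e9  # assumed modern budget: 100,000x more instructions

fraction = value(old_editor) / value(new_editor)
print(f"old editor delivers {fraction:.0%} of the modern editor's value")
# With these assumptions, 1/100,000th of the instructions still yields
# well over the essay's "at least 10 percent" of the value.
```

Under this sketch the old editor comes out at roughly 44 percent of the new one’s value for 0.001 percent of the cost — the exact curve doesn’t matter, only that any concave value function produces this lopsided cost-to-value pattern.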

The parallel, I hope, is clear. As AI and nanotech pervade the economy through the middle of the century, each additional unit of productive work will be put to a less valuable use, since we’re already doing the most valuable uses with the effort and resources we can presently bring to bear.

In some areas, such as scientific simulation, graphics, data mining, and so forth, every bit of extra computational horsepower is still useful. In others, such as text editing, it is not. There are lots of applications in between. The same thing will be true of the impact of nanotech in the physical world, and of AI in the economy in general.

Remember that this will come on at the pace of the computer revolution, more or less, and that we’re somewhere comparable to 1960 right now. Feynman was Babbage, Drexler was Turing, and von Neumann was … von Neumann.

So relax. The really weird stuff shouldn’t hit until after mid-century.
