# Singularity, part 2

This is the second essay in a series exploring if, when, and how the Singularity will happen, why (or why not) we should care, and what, if anything, we should do about it.

### Part II: What is this thing called Singularity?

Since I was trained, originally, as a mathematician, I never really liked the term Singularity. It implies that something goes infinite in a finite amount of time, that some function is a hyperbola, asymptotic to a vertical line. That’s pretty hard to come up with in a universe with physical laws like ours: even if the Earth were suddenly to be transmuted into a sphere of solid U235, the explosion that would result could be described by a simple exponential, albeit one with quite a high exponent.

“The Singularity,” as it is usually described, is what happens after a greater-than-human intelligence is created. This greater-than-human intelligence is usually thought of as an artificial intelligence, running on a computer or computer-like machine, but it could also be mechanical or genetic amplification of human intelligence, or various other things. This AI, then, whatever it is, being smarter than human, will be able to create version 2.0 of itself more quickly, more easily, or with fewer resources than humans created it. And so on to version 3.0, etc., in a classic positive feedback loop.

Positive feedback loops don’t produce singularities; they produce exponentials — albeit, in some cases, ones with quite high exponents. How high an exponent would the creation of a greater-than-human intelligence produce?
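The distinction between a true mathematical singularity and a mere exponential can be made concrete with a toy comparison (the function names here are my own, for illustration): a feedback loop proportional to its current size gives an exponential, which is large but finite at every finite time, while a *superlinear* feedback loop gives a hyperbola that genuinely blows up at a finite time.

```python
import math

# Linear feedback, dy/dt = k*y, gives an exponential: y(t) = y0 * e^(k*t).
# It grows fast, but is finite at every finite t.
def exponential(t, y0=1.0, k=5.0):
    return y0 * math.exp(k * t)

# Superlinear feedback, dy/dt = y^2, gives a hyperbola:
# y(t) = y0 / (1 - y0*t), which blows up at the finite time t = 1/y0.
def hyperbola(t, y0=1.0):
    return y0 / (1.0 - y0 * t)

print(exponential(0.999))   # large but finite
print(hyperbola(0.999))     # ~1000, and heading to infinity as t -> 1
```

Only the second kind of curve deserves the name “singularity”; a positive feedback loop of the create-a-smarter-successor sort is the first kind.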

Actually, it’s easy to produce greater-than-human intelligence: give a human a pencil and paper. Such an augmented human can multiply 10-digit numbers, something an unaided human has a really hard time doing. Want an even more intelligent system? Give the human a computer connected to the internet and access to Google. We humans have been making ourselves more intelligent in ways like this for our entire history as a species, and — by Jove — yes, there has been a positive feedback loop and yes, there has been an exponential increase.

But what would it mean that there was an increase in intelligence (even one with quite a high exponent)? That your computer would greet you with “Cogito ergo sum” instead of “You have mail”? That your toaster would think up an extensively annotated Korean translation of the Bhagavad-Gita from the original Sanskrit while warming your morning slices, and lecture you on it while you ate breakfast?

No — the thing about smart people or machines that makes a difference is that they do whatever it is that needs doing, faster, more easily, more efficiently. They are, in a word, more economically productive. Which means we can finesse the issue of whether we’re talking about increasing human intelligence, or new AI, or whatever: we can measure the “singularity” in terms of the growth rate of the world economy. Local improvements that don’t significantly impact the total clearly aren’t the kind of thing we’re interested in.

Economist Robin Hanson, mentioned in part 1, is way ahead of us. In this paper (see also this one) he analyzes the economic history of the human race, looking for singularity-like jumps in the growth rate. And sure enough, there are at least two big ones: agriculture and the Industrial Revolution. The growth rate for humanity’s gross output jumped by about a factor of 250 with agriculture, and by a factor of 60 with the Industrial Revolution.
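For exponential growth, multiplying the growth rate by a factor divides the doubling time by the same factor, so the jump factors in the text imply a rough doubling time for each era. A quick sketch (taking the industrial economy’s ~15-year doubling time as the anchor, per the text):

```python
# Each jump multiplies the growth rate, which divides the doubling time.
industrial_T = 15.0                 # years per doubling, roughly, today
agrarian_T = industrial_T * 60      # undo the Industrial Revolution's 60x jump
forager_T = agrarian_T * 250        # undo agriculture's 250x jump

print(agrarian_T)   # ~900 years per doubling in the agrarian era
print(forager_T)    # ~225,000 years per doubling for hunter-gatherers
```

These back-of-envelope numbers are in the same ballpark as Hanson’s fitted growth modes.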

Hanson’s analysis indicates that the next jump in the series, assuming the pattern holds, might well be coming up this century: the industrial economy has been going for roughly as many doubling times as the agrarian period did. The key issue, of course, is to guess what the growth-rate jump factor will be this time.

In this paper, Hanson guesses that the jump will simply be in the range of previous ones:

> If a new transition were to show the same pattern as the past two, then growth would quickly speed up by between 60‑ and 250-fold. The world economy, which now doubles in 15 years or so, would soon double in somewhere from a week to a month. If the new transition were as gradual (in power-law terms) as the Industrial Revolution was, then within three years of a noticeable departure from typical fluctuations, it would begin to double annually, and within two more years, it might grow a million-fold. If the new transition were as rapid as the agricultural revolution seems to have been, change would be even more sudden.
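As a rough sanity check on the quoted arithmetic: dividing the current ~15-year doubling time by the two historical speedup factors puts the new doubling time in the range of a few weeks to a few months.

```python
# Speeding up growth by a factor F divides the doubling time by F.
industrial_doubling_days = 15 * 365.25

for factor in (60, 250):
    days = industrial_doubling_days / factor
    print(factor, round(days, 1))   # 60x -> ~91 days; 250x -> ~22 days
```

That is, a speedup at the low (Industrial-Revolution-like) end gives roughly quarterly doubling, and the high (agriculture-like) end gives roughly three-week doubling.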

I personally find this a bit hard to swallow. Growth modes don’t descend from above on burnished wings: they result from feedback loops in the economic, technological, and informational system. In the jump from hunter-gatherer to agriculture, the fact that we could live in towns (instead of having to follow game) meant buildings, food storage, increased trade, and so forth. Going from an agrarian to an industrial economy got us into the familiar machine-makes-machine capital investment cycle.

What kind of growth rate would we expect from a mind-makes-mind loop? One possible hint is that the preceding jumps seem to have the structure of a 1-2 punch. Each is a ka-boom where the ka is something informational and the boom is the physical working out of the new knowledge. For the Industrial Revolution, the ka was the printing press and science itself. For agriculture, the ka was writing and proto-writing, such as cave pictures and counting with tally marks.

Now here’s an interesting statistic: the number of scientific papers, starting in the 1700s, had a doubling time (15 years) that foreshadowed the doubling time of the mature industrial economy. We don’t have any way of measuring the doubling time of proto-writing, as far as I know, so this is a shot in the dark.

The modern-day ka to the Singularity’s boom is of course computers and the internet. These have a well-known bundle of growth rates with doubling times on the order of a year.

So back to the historical jump series. My problem with the notion of a series of more-or-less constant jump factors is that they do in fact lead to a mathematical singularity, as opposed to an exponential curve. But we can fix this and still match the historical pattern with a series of jump factors that declines over time: say, one that goes down by a factor of 4 with each jump. (Or more likely, a function that’s nearly the same at high values but levels out at 1 instead of 0.)

In this case, we would go from the factor of 250 for agriculture to the factor of 60 for the Industrial Revolution to a factor of 15 for the “Singularity.” Which means the doubling time would drop from 15 years to … one year, right in the Moore’s Law range. Our shot in the dark has been followed by a muffled thud.
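The declining-jump-factor series above is simple enough to write down (a sketch of the text’s own arithmetic, nothing more):

```python
# Historical jump factors from the text: agriculture 250x, industry 60x.
jump_factors = [250, 60]

# Proposed pattern: each jump factor declines by about 4x.
next_factor = jump_factors[-1] / 4       # 60 / 4 = 15
print(next_factor)                       # 15.0

# Applying a 15x speedup to the current ~15-year doubling time:
new_doubling_time = 15.0 / next_factor
print(new_doubling_time)                 # 1.0 year -- Moore's Law territory
```

Note that 250/60 is only about 4.2, so “declines by a factor of 4” is itself an eyeballed fit from a two-point series; the whole exercise is a consistency check, not a derivation.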

To sum up: the “Singularity” should best be thought of as the second half of the information technology revolution, extending it to most physical and intellectual work. Overall economic growth rates will shift from their current levels of roughly 5% to Moore’s Law-like rates of 70% to 100%. The shift will probably take on the order of a decade (paralleling the growth of the internet), and probably fall somewhere in the 20s, 30s, or 40s.
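The correspondence between annual growth rates and doubling times claimed here is easy to check with the standard compounding formula:

```python
import math

def doubling_time(annual_growth):
    # Years to double at a compounding annual growth rate:
    # solve (1 + g)^T = 2 for T.
    return math.log(2) / math.log(1 + annual_growth)

print(round(doubling_time(0.05), 1))   # ~14.2 years: roughly today's economy
print(round(doubling_time(0.70), 2))   # ~1.31 years
print(round(doubling_time(1.00), 2))   # 1.0 year: doubling annually
```

So 5% growth indeed corresponds to the ~15-year doubling used throughout, and 70–100% growth to the roughly annual doubling of Moore’s Law.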
