# Singularity, part 2

This is the second essay in a series exploring if, when, and how the Singularity will happen; why (or why not) we should care; and what, if anything, we should do about it.

### Part II: What is this thing called Singularity?

Since I was trained, originally, as a mathematician, I never really liked the term Singularity. It implies that something goes infinite in a finite amount of time, that some function is a hyperbola, asymptotic to a vertical line. That’s pretty hard to come up with in a universe with physical laws like ours: even if the Earth were suddenly transmuted into a sphere of solid U235, the resulting explosion could be described by a simple exponential, albeit one with quite a high exponent.

“The Singularity,” as it is usually described, is what happens after a greater-than-human intelligence is created. This greater-than-human intelligence is usually thought of as an artificial intelligence, running on a computer or computer-like machine, but it could also be mechanical or genetic amplification of human intelligence, or various other things. This AI, then, whatever it is, being smarter than human, will be able to create version 2.0 of itself more quickly, more easily, or with fewer resources than humans created it. And so on to version 3.0, etc., in a classic positive feedback loop.

Positive feedback loops don’t produce singularities; they produce exponentials — albeit, in some cases, ones with quite high exponents. How high an exponent would be produced by the production of a greater-than-human intelligence?
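The distinction can be made precise with a toy model (mine, not anything from the essay’s sources): a feedback loop in which capability grows in proportion to itself yields an exponential, while a true mathematical singularity requires superlinear feedback.

```latex
\frac{dx}{dt} = kx \;\Longrightarrow\; x(t) = x_0 e^{kt}
\quad \text{(exponential; finite for every finite } t\text{)}

\frac{dx}{dt} = kx^2 \;\Longrightarrow\; x(t) = \frac{x_0}{1 - k x_0 t}
\quad \text{(hyperbolic; diverges at } t = 1/(k x_0)\text{)}
```

Only the second, superlinear case produces the asymptote-to-a-vertical-line behavior the word “singularity” suggests.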

Actually, it’s easy to produce greater-than-human intelligence: give a human a pencil and paper. Such an augmented human can multiply 10-digit numbers, something an unaided human has a really hard time doing. Want an even more intelligent system? Give the human a computer connected to the internet and access to Google. We humans have been making ourselves more intelligent in ways like this for our entire history as a species, and — by Jove — yes, there has been a positive feedback loop and yes, there has been an exponential increase.

But what would it mean that there was an increase in intelligence (even one with quite a high exponent)? That your computer would greet you with “Cogito ergo sum” instead of “You have mail”? That your toaster would think up an extensively annotated Korean translation of the Bhagavad-Gita from the original Sanskrit while warming your morning slices, and lecture you on it while you ate breakfast?

No — the thing about smart people or machines that makes a difference is that they do whatever it is that needs doing, faster, more easily, more efficiently. They are, in a word, more economically productive. Which means we can finesse the issue of whether we’re talking about increasing human intelligence, or new AI, or whatever: we can measure the “singularity” in terms of the growth rate of the world economy. Local improvements that don’t significantly impact the total clearly aren’t the kind of thing we’re interested in.

Economist Robin Hanson, mentioned in part 1, is way ahead of us. In this paper (see also this one) he analyzes the economic history of the human race, looking for singularity-like jumps in the growth rate. And sure enough, there are at least two big ones: agriculture and the Industrial Revolution. The growth rate for humanity’s gross output jumped by about a factor of 250 with agriculture, and by a factor of 60 with the Industrial Revolution.

Hanson’s analysis indicates that the next jump in the series, assuming the pattern holds, might well be coming up this century: the industrial economy has been going for roughly as many doubling times as the agrarian period did. The key issue, of course, is to guess what the growth-rate jump factor will be this time.

In this paper, Hanson guesses that the jump will simply be in the range of previous ones:

> If a new transition were to show the same pattern as the past two, then growth would quickly speed up by between 60- and 250-fold. The world economy, which now doubles in 15 years or so, would soon double in somewhere from a week to a month. If the new transition were as gradual (in power-law terms) as the Industrial Revolution was, then within three years of a noticeable departure from typical fluctuations, it would begin to double annually, and within two more years, it might grow a million-fold. If the new transition were as rapid as the agricultural revolution seems to have been, change would be even more sudden.

I personally find this a bit hard to swallow. Growth modes don’t descend from above on burnished wings: they result from feedback loops in the economic, technological, and informational system. In the jump from hunter-gatherer to agriculture, the fact that we could live in towns (instead of having to follow game) meant buildings, food storage, increased trade, and so forth. Going from an agrarian to an industrial economy got us into the familiar machine-makes-machine capital investment cycle.

What kind of growth rate would we expect from a mind-makes-mind loop? One possible hint is that the preceding jumps seem to have the structure of a 1-2 punch. Each is a ka-boom where the ka is something informational and the boom is the physical working out of the new knowledge. For the Industrial Revolution, the ka was the printing press and science itself. For agriculture, the ka was writing and proto-writing, such as cave pictures and counting with tally marks.

Now here’s an interesting statistic: the number of scientific papers, starting in the 1700s, had a doubling time (15 years) that foreshadowed the doubling time of the mature industrial economy. We don’t have any way of measuring the doubling time of proto-writing, as far as I know, so this is a shot in the dark.

The modern-day ka to the Singularity’s boom is, of course, computers and the internet. These have a well-known bundle of growth rates, with doubling times on the order of a year.

So back to the historical jump series. My problem with the notion of a series of more-or-less constant jump factors is that they do in fact lead to a mathematical singularity, as opposed to an exponential curve. But we can fix this and still match the historical pattern with a series of jump factors that declines over time: say, one that goes down by a factor of 4 with each jump. (Or more likely, a function that’s nearly the same at high values but levels out at 1 instead of 0.)

In this case, we would go from the factor of 250 for agriculture to the factor of 60 for the Industrial Revolution to a factor of 15 for the “Singularity.” Which means the doubling time would drop from 15 years to … one year, right in the Moore’s Law range. Our shot in the dark has been followed by a muffled thud.
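The arithmetic behind that guess can be sketched in a few lines (the jump factors and the 15-year doubling time are the post’s numbers; the divide-by-4 decline is the hypothesis above):

```python
# Historical growth-rate jump factors cited in the post.
agriculture_jump = 250
industrial_jump = 60

# Hypothesis: each jump factor is roughly 4x smaller than the last.
next_jump = industrial_jump / 4            # 15.0

# The mature industrial economy doubles about every 15 years;
# a 15-fold jump in the growth rate divides the doubling time by 15.
industrial_doubling_years = 15
next_doubling_years = industrial_doubling_years / next_jump

print(next_jump)            # 15.0
print(next_doubling_years)  # 1.0 -- right in the Moore's Law range
```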

To sum up: the “Singularity” is best thought of as the second half of the information technology revolution, extending it to most physical and intellectual work. Overall economic growth rates will shift from their current levels of roughly 5% to Moore’s Law-like rates of 70% to 100%. The shift will probably take on the order of a decade (paralleling the growth of the internet), and will probably fall somewhere in the 2020s, 2030s, or 2040s.
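Those growth-rate figures follow directly from the doubling times (a sketch of the conversion; 100% is the discrete annual rate for a one-year doubling, and roughly 70% is its continuous-compounding equivalent, ln 2 ≈ 0.693):

```python
import math

def annual_growth(doubling_years: float) -> float:
    """Discrete annual growth rate implied by a given doubling time."""
    return 2 ** (1 / doubling_years) - 1

# Today's economy: a ~15-year doubling time.
print(round(annual_growth(15) * 100, 1))   # 4.7 -- "roughly 5%"

# A one-year doubling time.
print(round(annual_growth(1) * 100, 1))    # 100.0 (discrete annual rate)
print(round(math.log(2) * 100, 1))         # 69.3 (continuously compounded)
```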

February 12th, 2009 | Economics, Machine Intelligence, Nanodot, Nanotechnology | 14 Comments

### About the Author: J. Storrs Hall

1. Anonymous February 12, 2009 at 4:49 pm - Reply

It seems pretty unlikely that proto-writing caused the farming revolution.

2. […] MORE THOUGHTS ON THE SINGULARITY, from J. Storrs Hall. […]

3. Anonymous February 13, 2009 at 1:58 pm - Reply

“It seems pretty unlikely that proto-writing caused the farming revolution. ”

I’m not so sure. The planting of seeds was only a small portion of the agricultural revolution.

There were many other things that happened to support agrarian societies, and most of them seemed to have formed before the end of the Younger Dryas cold snap. There was the storage of seed corn, of any grain, which had to be measured, tallied, stored, and then redistributed in the Spring planting, from the records of the tallies. There was the trading over substantial distances in things as basic to the agrarian revolution as the stones and obsidian that took the place of metals in tools for so long. There was the storage of grain for military campaigns, needing allotted contributions, and accounting of the Army’s expenditure of food. Activities crucial to the existence of agrarian societies required many of the information processing abilities of proto-writing.

Regards,

Tom Billings

4. Anonymous February 13, 2009 at 5:08 pm - Reply

Check out this detailed blog http://www.singularity2050.com

5. Anonymous February 13, 2009 at 5:10 pm - Reply

This ‘doubling every year’ exaggeration is why Kurzweil’s predictions are always 50% sooner than they actually happen.

Computing does NOT double every year. It doubles every 18 months. Thus, there are 20 doublings in 30 years, not 30 in 30 years.

Hence, the Singularity will happen in 2060-65. NOT 2045 as Kurzweil says, or even sooner as Storrs Hall says.

6. Anonymous February 13, 2009 at 5:11 pm - Reply

“Overall economic growth rates will shift from their current levels of roughly 5% to Moore’s Law-like rates of 70% to 100%. ”

Wait, the world economy is growing at 5%? I thought we were in a recession!!

Also, Moore’s Law is exactly 58% a year. Not 70-100%.

7. Anonymous February 14, 2009 at 8:41 am - Reply

First, Kurzweil uses the 18-month doubling time in his predictions; if you read The Singularity Is Near, you’ll see this (although some related technologies do double in a year). Second, the 2045 date is not based on Moore’s law directly; it’s the time when Kurzweil thinks a machine equivalent to many, many human brains (1,000 or 1 million, I forget) will cost \$1,000. It’s an ultra-conservative estimate, imo. Even a supercomputer capable of human-level AI (which may be possible as soon as a couple of years from now) should be able to kick-start the singularity, depending on things which are simply not known for certain yet. People just don’t agree on these things, so what can you say..

8. Anonymous February 15, 2009 at 2:27 am - Reply

Kurzweil is always wrong by overstating the rate of progress by 50%. In ‘The Age of Spiritual Machines’, he says that ‘computer power doubles every year’. It simply does not.

This is why his 2009 predictions (made in 1999) have mostly not happened. Most of them will happen by 2014. But most will not happen by the end of 2009.

Kurzweil’s 2045 estimate for the Singularity is absurd. That is just 36 years from now. The Singularity will be 2060-65.

9. Anonymous February 15, 2009 at 6:36 am - Reply

“Actually, it’s easy to produce greater-than-human intelligence: give a human a pencil and paper. Such an augmented human can multiply 10-digit numbers, something an unaided human has a really hard time doing. Want an even more intelligent system? Give the human a computer connected to the internet and access to Google. We humans have been making ourselves more intelligent in ways like this for our entire history as a species, and — by Jove — yes, there has been a positive feedback loop and yes, there has been an exponential increase.”

Although I agree with you, this doesn’t capture the essence of what I, at least, consider to be the Singularity. It is a strong critical argument against the claim that the Singularity will actually happen as proposed by SF writers. Humans have been increasing their intelligence slowly over millennia; what is to say that a slightly more intelligent machine or man-machine symbiosis will increase its intelligence billions of times faster?

“To sum up: the ‘Singularity’ should best be thought of as the second half of the information technology revolution, extending it to most physical and intellectual work. Overall economic growth rates will shift from their current levels of roughly 5% to Moore’s Law-like rates of 70% to 100%. The shift will probably take on the order of a decade (paralleling the growth of the internet), and probably fall somewhere in the 2020s, 2030s, or 2040s.”

Wasn’t part of the original singularity idea that an AI more intelligent than humans would emerge?

If this is possible, I’d prefer to look at our current situation as something similar to when the first protozoic lifeforms emerged. The elemental structures are there for further development, and most computers are connected nowadays. The existing lifeforms (computer viruses) are, however, barely the equivalent of biological viruses. If/when sentient beings arise from this stratum, then we have what I would call the Singularity.

I think your post is a very interesting read, but the economic analysis doesn’t have much to do with my image of the singularity.

Comparing the history of killer whales with humans would probably bring more similarities than comparing humans with sentient computers.
(After all, both humans and killer whales eat other organisms to survive, are prone to mutations, and have sex, offspring, and different civilizations.)

I’m not saying that your analysis is wrong, just that the artificial lifeforms will have an entirely different economy; it will interfere with our own and, if so inclined, dominate it within a century or two. This is a situation that I don’t think can be compared to anything less than the emergence of Homo sapiens sapiens, if even that.

10. Anonymous February 16, 2009 at 2:09 pm - Reply

Every 18 months to two years, technology exist to put twice as many transistors on a chip. These new transistors are twice as fast as the prior generation. Hence we obtain 4X computer speed in two years. What follows is a 2X computer speed every year.

11. Anonymous February 17, 2009 at 8:52 pm - Reply

“Every 18 months to two years, technology exist to put twice as many transistors on a chip. These new transistors are twice as fast as the prior generation. Hence we obtain 4X computer speed in two years. What follows is a 2X computer speed every year. ”

That is actually not correct. This is an exact sentence you have lifted from Kurzweil.

Every 18 months, transistor sizes shrink by half. Period. That is why we get a doubling every 18 months. The new transistors are not twice as fast as the previous ones, they are the same speed.

The proof is in the pudding. Today, 1 GB of RAM is about \$20. 15 years ago, in 1994 (10 doublings, or 1024X), 1 MB cost \$20. 15 years before that, in 1979 (10 doublings, or 1024X again), 1 KB cost \$20. In 2024, 1 TB will cost \$20.

So the 18-month doubling is quite exact. It is not 12 months.

12. […] In the previous essay in this series, I argued top-down, from historical and economic precedents, that the coming singularity might look approximately like the second half of the computer/internet revolution. Today I’ll argue the same conclusion from the bottom up: by looking at things from the point of view of the individual AI. […]

13. Anonymous February 23, 2009 at 8:53 pm - Reply

I think this argument that the singularity can be grasped by looking at the world economy is a bit strained. A true singularity would rather quickly unleash MNT. With that, the market and the entire scarcity presumption behind traditional economics would be largely over. If that isn’t an economic singularity, I find it difficult to imagine what would be. Simultaneously, the cost per unit of intellectual work, increasingly the only kind with much economic value, would quickly drop precipitously given AGI. So costs of production quickly tend toward zero. The idea that these sorts of changes will result in only a factor of 4 increase is utterly bizarre.

14. […] As far as the Singularity is concerned, Charlie’s notion of it is not the same as mine, so arguing this point would be a case of talking past each other. But I will strongly claim that an AI/nanotech revolution that kicks the economy into a growth mode that looks like Moore’s Law, “is going to affect everything.” […]