Singularity, part 6

This is the sixth essay in a series exploring if, when, and how the Singularity will happen, why (or why not) we should care, and what, if anything, we should do about it.

Part VI: The heavily-loaded takeoff

The fastest software I ever used ran on some of the slowest computers I ever had. Circa 1980, right around the time the original IBM PC was being introduced, there were a number of hobbyist and competitor computers based on processors such as the 8086. These had comfortably less than one MIPS of processing power, typically well under a megabyte of memory, and couldn’t do much by the standards of modern systems. But what they could do, such as edit a text file that fit entirely into memory, they did fast. You looked at 24 lines by 80 columns of text mapped directly into the computer’s memory (at physically hard-wired locations) and things happened instantly.
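The appeal of that memory-mapped arrangement can be sketched in a few lines of Python (a toy model of the idea, not any particular machine’s layout): the screen is a flat array of 24×80 character cells, and putting a character at a given row and column is a single store at a fixed offset, with nothing in between.

```python
# Toy model of a memory-mapped 24x80 text display: the "screen" is a
# flat byte array, and putting a character at (row, col) is a single
# store at offset row*80 + col -- no layers, no redraw, no waiting.
ROWS, COLS = 24, 80
screen = bytearray(b" " * (ROWS * COLS))

def put_char(row, col, ch):
    screen[row * COLS + col] = ord(ch)

def put_string(row, col, text):
    for i, ch in enumerate(text):
        put_char(row, col + i, ch)

put_string(0, 0, "HELLO")
print(bytes(screen[:5]).decode())  # HELLO
```

On the real machines the array was the display hardware’s own memory, so the store and the pixels changing were the same event.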

Compare this with what came before, when we used terminals connected by 300-baud links to time-shared mainframes, and with what came after, when the software for micros got bigger and more capable and you were constantly swapping your data to a floppy disk.

The machine I’m writing this essay on could do the pure bit-flipping work of a million of those micros (e.g. in doing a quantum mechanics simulation of a molecule or a fluid flow simulation of a wind tunnel). It does not, unfortunately, let me be a million times as productive.

On the other hand, I am somewhat more productive. Obviously if I’m doing heavy scientific computation, I’m a lot better off. But even if I’m just writing essays, the enormous indexing and pattern-matching power of the computers at Google saves me hours of hunting down facts in a paper-book library. I have software that lets me produce polished, typeset documents that I would have had to go to a professional service for, back in the day. I can produce photorealistic pictures of imaginary scenes for purposes ranging from engineering to art. The horsepower of an 8086 simply wasn’t capable of any of these things.

Diminishing returns

There’s a phenomenon that is implicit in almost every economic analysis: the law of diminishing returns. It says, simply, that each dollar you spend is going to get you something worth less than what you got from earlier dollars. It’s simple because all it says is that if there were something more valuable to get, you would have gotten it first, and put off the less valuable thing.

The same thing is true of computing cycles. The text editor I used on a 1980s micro gave me a significant fraction of the value of the one I’m using now — say, at least 10 percent — for probably one hundred-thousandth of the cost in instructions. The first instructions in the editing program let you type text onto the screen. The billionth ones animate specular highlights on the simulated button-press as you select between nearly identical typefaces.
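The arithmetic behind that comparison is worth spelling out (the 10 percent and one-hundred-thousandth figures are the rough estimates above, not measurements): if the old editor delivered a tenth of the value for a hundred-thousandth of the instruction cost, its value per instruction was four orders of magnitude higher.

```python
# Back-of-envelope check on the editor comparison above.
# Both fractions are the essay's rough estimates, not measurements.
old_value_fraction = 0.10   # old editor: ~10% of the modern one's value...
old_cost_fraction = 1e-5    # ...for ~1/100,000 of the instruction cost

value_per_instruction_ratio = old_value_fraction / old_cost_fraction
print(value_per_instruction_ratio)  # 10000.0 -> four orders of magnitude
```

That ratio is exactly what diminishing returns predicts: the cheap early instructions bought the valuable capabilities first.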

The parallel, I hope, is clear. As AI and nanotech pervade the economy through the middle of the century, each additional unit of productive work will be put to a less valuable use, since we’re already doing the most valuable uses with the effort and resources we can presently bring to bear.

In some areas, such as scientific simulation, graphics, data mining, and so forth, every bit of extra computational horsepower is still useful. In others, such as text editing, it is not. There are lots of applications that are in between. The same thing will be true of the impact of nanotech in the physical world, and of AI in the economy in general. For example:

  • The most obvious place where nanotech makes a huge difference is space development. Today’s best technology is none too good and hideously expensive. Nanotech is going to make the difference between doing it and not doing it.
  • Any technology that is good enough to let you live on Mars is probably good enough to let you live in Tahiti — or more precisely, on a boat or floating city at Tahiti’s latitude, or indeed anywhere in the temperate or tropic regions. Tahiti itself is a lot like a boat now, in that everything except a few native crops has to be imported. Even a minor revolution in manufacturing and transportation will make it economically feasible to settle the oceans. I’d guess that by 2100, there will be a base on Mars, but more people than the world’s current population will live on (or in) the seas of Earth.
  • Similarly, there is a lot of land that is expensive or uncomfortable to live on (but cheap to buy), in places like Canada and Russia. Expect nanotech to ameliorate this quite a bit. The bottom line is that advancing technology will give us something like five times as much livable area as we have now by the end of the century.
  • With today’s personal-level hardware you can have photorealistic pictures or real-time action in a computer game, but not both. That distinction is on the way out, though. In the physical world, the ability to create high-fidelity virtual worlds that are indistinguishable from reality will take a long time. However, ones that are serviceable subsets will show up fairly soon.
  • The early text editor had its font and the position of each character hard-wired, fixed to the screen. Modern computers have pixel-mapped screens, and can make characters whatever size, shape and position they want. Similarly, early nanotech will feature fixed machines of the kind we’re used to now: flying cars, humanoid robots, buildings. Later nanotech will be a lot more fluid: cities composed of Utility Fog or the functional equivalent.

Remember that this will come on at the pace of the computer revolution, more or less, and that we’re somewhere comparable to 1960 right now. Feynman was Babbage, Drexler was Turing, and von Neumann was … von Neumann.

So relax. The really weird stuff shouldn’t hit until after mid-century.

  1. Anonymous March 5, 2009 at 1:05 pm - Reply

    Until after mid-century? Let’s see, according to Moore’s law and the current power of supercomputers, supercomputers around 2050 will be about 1,048,576 times as powerful as a human brain (assuming they reach human-level computational power in 2010 and double every two years). Do you really think it is going to take that long and that much computation for ‘weird’ stuff to happen? I seriously doubt it. I expect ‘weird’ things in the first half of the next decade personally (supercomputers 2-8 times as powerful as a human brain), but then I never subscribed to the strange belief that humans are special and nothing can ever do what we can do.

    One more point, about the ‘useful’ power of a computer. You are generalizing quite a bit. If I buy a million-dollar supercomputer and only use it to run calc.exe to add numbers, can I rationally claim that a million-dollar supercomputer in 2009 with teraflops of capacity is no better than a 1974 calculator that cost $300 and ran at less than 1 MIPS? Saying that reflects more on you than on computers and accelerating technology. A word processor is never going to be much faster than the first ones at just typing letters, because even the first ones spent most of their time waiting for user input; if you could figure out how to speed up your typing a trillion-fold, you would see great utility in modern computers when typing letters, etc. You have not shown that computers have diminishing returns, only that humans can’t keep up with computers. You touched on 3D rendering, but there are tasks such as molecular simulations, drug simulations, brain simulations, structure simulations, etc., that your 1980 1-MIPS computer couldn’t dream of doing, that will lead to huge changes in society. Imagine designing the F-22 with an 8088 PC, or simulating a rat brain on it. It couldn’t be done. These are useful things that are impacting us, imo, far more than the things that were done in the 1980s on those computers, which people probably argued had diminishing returns over typewriters. So you should broaden your perspective and think about this issue some more.

    I don’t mean to attack you, but it seems you missed such obvious ideas in your thinking about this stuff.

    James G.
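    The Moore’s-law arithmetic in the comment above is easy to check; here is a quick sketch under the commenter’s own assumptions (human-level compute reached in 2010, capacity doubling every two years):

```python
# Projected multiples of human-brain-level compute, under the comment's
# assumptions: baseline parity in 2010, doubling every two years.
def power_multiple(year, baseline=2010, doubling_years=2):
    """Multiple of human-brain compute at `year` under the stated assumptions."""
    return 2 ** ((year - baseline) / doubling_years)

print(power_multiple(2050))  # 1048576.0 -> 2**20, about a million-fold
print(power_multiple(2014))  # 4.0 -> inside the "2-8x" range the comment cites
```

    So 2050 gives twenty doublings, about a million-fold; whether that compute translates into ‘weird’ outcomes is exactly the question the essay and the comment disagree on.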

  2. J. Storrs Hall March 6, 2009 at 9:23 am - Reply

    I’m just using computing as an analogy for the impact of physical nanotech in this essay; of course computing itself will continue to improve from its current state. But the law of diminishing returns still holds: consider a carpet with a controller for each individual fiber, one that gives your feet a massage (or cleans your shoes) as you walk across it.

    One thing we already know about intelligence, and that will apply to AI as well, is that you can absorb an unlimited amount of it to little effect in a bureaucracy where everybody is basically scheming against each other to get the power to force each other to fill out more forms. There’s no reason to expect that to stop. Our software systems have even more bloat than our government.

    On the other hand, there is a huge upside in the possible capabilities, and as the computing experience shows, the total net effect is positive. This essay was primarily an attempt to point out that the experience in computation in the past 50 years will be a reasonable guide to the trajectory (and timescale) we should expect in the physical world with nanotech in the next 50.

  3. […] MORE THOUGHTS ON THE SINGULARITY, from J. Storrs Hall. […]

  4. Anonymous March 9, 2009 at 2:20 am - Reply

    After mid-century? Didn’t you say in Part II that the economic singularity (where world GDP doubles every 18 months) could happen in the 2020s, 30s, or 40s?

    Can you reconcile this contradiction for us?

  5. Anonymous March 9, 2009 at 9:21 am - Reply

    Obama is taxing away investable capital from potential investors, so nanotech and biotech will slow, postponing the Singularity indefinitely. Is that what you who supported Obama consider to be foresight? Will the benefits of the singularity now flow from China? Think about it.

  6. Anonymous March 9, 2009 at 9:39 am - Reply

    He DID refer to “the really weird stuff” for after mid-century. Which raises the question of what is “weird” vs. “not weird”.

  7. Anonymous March 9, 2009 at 12:17 pm - Reply

    Diminishing returns vs. network effects:

    When networks grow they become more valuable.

    Consider the future of biology in the coming decades.
    Cost of sequencing a genome drops.
    More genomes combined with massive computation generate modest correlations between DNA regions and traits.
    Knowing where in the genome to look, scientists will sequence DNA from people from the extreme trait range.
    This will lead to discovery of rare variants with large effect on the trait.
    This will lead to identification of coding-DNA and regulatory DNA underlying a trait.
    This will lead to identification of proteins and molecular pathways underlying a trait.
    This will lead to computer models of biological mechanism that accurately predict phenotype from genotype.
    This will revolutionize medicine and agriculture.
    As knowledge accumulates, each piece adds more value to the whole.

    I suspect such network effects will be common in knowledge systems. Intelligence is probably a complex, network-like, knowledge system that will benefit greatly from massive computation. I.e., hard take-off AGI.
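    The contrast the comment draws can be made concrete with a toy model (the functional forms here are illustrative assumptions, not measurements): under diminishing returns the value of a system grows like log(n), so each added unit is worth less, while under Metcalfe-style network effects it grows like n², so each added node is worth more than the last.

```python
import math

# Toy contrast between the two regimes discussed above. The functional
# forms are illustrative assumptions, not measurements:
#   diminishing returns ~ log(n);  network effects ~ n**2 (Metcalfe-style).
def diminishing_value(n):
    return math.log(n + 1)

def network_value(n):
    return n * n

for n in (10, 100, 1000):
    # Marginal value of the n-th unit in each regime.
    marginal_dim = diminishing_value(n) - diminishing_value(n - 1)
    marginal_net = network_value(n) - network_value(n - 1)
    print(n, round(marginal_dim, 4), marginal_net)
```

    The marginal column shrinks in one regime and grows in the other; which regime a given technology sits in is the crux of the disagreement.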

  8. Anonymous March 9, 2009 at 7:54 pm - Reply

    Let me ask you all for your views:

    When do you estimate we will have:

    1 Mass production of Diamondoid and Fullerene networks
    2 Basic Programmable Molecular Assembler devices
    3 Mass produced true Nano computers
    4 Cell Repair machines
    5 Artery Cleaning machines

  9. Anonymous March 10, 2009 at 7:53 pm - Reply

    “But the law of diminishing returns still holds: consider a carpet with a controller for each individual fiber, that gave your feet a massage (or cleaned your shoes) as you walked across it. ”

    That sounds a lot like someone in the 1800s saying “in the 20th century, they’ll breed faster horses for transportation.” I mean, that’s incredibly short-sighted, imo. That’s something Smalley would say.

    And I seriously doubt nanotech is going to look like the last 50 years of computing. Computers don’t self-replicate; each generation of chip and manufacturing process had to be paid for by consumers.
    If the first CPU had self-replicated, we would not have HAD the last 50 years of computing.

    “Let me ask you all for your views:

    When do you estimate we will have:? :

    1 Mass production of Diamondoid and Fullerene networks
    2 Basic Programmable Molecular Assembler devices
    3 Mass produced true Nano computers
    4 Cell Repair machines
    5 Artery Cleaning machines ”

    Freitas and his group plan to have DMS by 2012. And several groups (Novamente, Google, NASA, DARPA, etc.) all plan to have full AI by 2011 or 2012. I predict all those things and more no later than 2013. (Most people in this field hold opinions that differ from mine, but I think my reasoning and evidence are stronger. I doubt most people will believe it until it’s here, though.)

    James G.

  10. Anonymous March 11, 2009 at 3:39 am - Reply

    ‘I think my reasoning and evidence is stronger’ Care to share?

    It’s the full AI by 2013 I find the most questionable. Put aside the hardware factor (we’ll get the hardware; that’s not an issue), but if by full AI you mean human-level intelligence, man, we can barely *define* that. Don’t get me wrong, I consider there to be nothing magical about our brains; I think we can take a divide-and-conquer approach and knock it out a lobe at a time, but right now I think we’re still mapping out our strategy.

    Anyway, would love a link.

  11. Anonymous March 11, 2009 at 3:09 pm - Reply

    “Meet Novamente, and Dr. Ben Goertzel. Novamente’s mission statement is to have self modifying human level intelligence in roughly 2012.”

    If you dig around that site you’ll find more links to the original sources.

  12. Anonymous March 12, 2009 at 12:27 am - Reply

    Many thanks, I’ll take a look.

  13. Anonymous March 12, 2009 at 4:28 pm - Reply

    very nice post, thanks!!!!!

  14. Anonymous March 14, 2009 at 1:51 pm - Reply

    I think we may have flying cars as early as 1999. 😛

    The notion of forecasting or prognosticating this advance or that event is kind of absurd. For one thing, the mediums these developing technologies are being created in have seemingly arbitrary and, more importantly, unknown constraints. They may make self-cleaning arteries or self-repairing cells, but what if cells and tissue structures become obsolete? Even if you find the proper criteria for anticipating a singularity, because advances are transactional and distributed across several dimensions of inter-related fields, what you’re actually looking at becomes a line of best fit on a statistical graph near an asymptote. Here the law of diminishing returns does apply.

  15. Anonymous March 19, 2009 at 3:24 pm - Reply

    “But the law of diminishing returns still holds: consider a carpet with a controller for each individual fiber, that gave your feet a massage (or cleaned your shoes) as you walked across it.”

    Nice carpet! It’s very long, and as groups of fibres in front of my feet spell “Get ready, 3, 2, 1”, I brace like a surfer. But the acceleration is gentle; soon I’m cruising at 40 km/h. So glad we have no cars in this city.

