Singularity, part 1

This is the first essay in a series exploring if, when, and how the Singularity will happen; why (or why not) we should care; and what, if anything, we should do about it.

Part I: The Singularity and its Discontents

The concept of the Technological Singularity is so clearly part of the zeitgeist that it surely needs no introduction to this audience. There’s now even a recently formed Singularity University, which proposes to study it as its primary subject matter.

The formation of Singularity U. was greeted with yawns in some quarters, however. On Ars Technica, John Timmer asks, “Is the world ready for a university that’s based on a concept that may not even exist?” While this sounds a bit disingenuous (after all, consider all the religious universities out there), it does seem that getting a good grasp of all the areas S.U. covers would be more likely to take nine years than nine weeks.

On his blog, Peter Glaskowsky writes

This all sounds wonderful: that is, I wonder if Kurzweil, Diamandis, and Page actually believe that the solutions to poverty, hunger, and pandemics will be found in technology.

It seems to me that it would be more useful to take these students and executives through some classes on philosophy, theology, politics, sociology, and history–fields they’re probably not sufficiently aware of and that are much more directly related to the causes of, and possible cures for, social problems.

He has a point: poverty, hunger, and pandemics are eminently curable with the technology of the 20th century; the reason that they exist at all in some parts of the world has much more to do with bad government preventing the known solutions from being used.

On the other hand, we’ve had philosophy, theology, politics, sociology, and history for, well, most of history — and they haven’t done such a great job solving these problems. In fact, the bottom line is that, historically, the problems that technology has addressed have gotten solved, and the ones that were dependent on politics and so forth have not.

On the technical side of things, there is a now-classic paper by Robin Hanson which argues that explosive economic growth (a fair proxy for the Singularity for the moment) requires a rate of return on general investment not likely to be seen for a century or more.

The part of Hanson’s argument that seems most cogent is that the parts of the economy that have seen high growth rates (computers and the internet, for example) aren’t generally applicable enough to push the rest of the economy into the same accelerated growth mode. For example, if your business uses computer-controlled lathes, the fact that the computer drops in cost by half each year doesn’t do you all that much good as long as the cost of the lathe, floor space, power, and so forth stay high.

In fact, there seems to be a sort of Amdahl’s Law at work. Amdahl’s Law, for those of you not involved with parallel algorithms, points out that a program that’s half parallelizable and half inherently sequential can’t be made more than twice as fast, no matter how many processors you try to run it on. If you apply this thinking to the economic impact of any given technology improvement, it simply says that the non-technological bottlenecks will dominate, even if the tech part is completely perfect and completely free.
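The arithmetic behind this bottleneck effect is easy to check. Here is a minimal sketch in Python; the function name and the illustrative 10% cost split are mine for the sake of example, not figures from Amdahl or Hanson:

```python
# Amdahl's Law: overall speedup when only a fraction p of the work
# benefits from an improvement, and that fraction is sped up by a factor s.
def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# A half-parallelizable program can never run more than 2x faster,
# even with an effectively unlimited number of processors.
print(amdahl_speedup(0.5, 1_000_000))   # just under 2.0

# Economic analogy (illustrative numbers): if computing is only 10% of a
# machine shop's costs, making it essentially free cuts total costs by
# barely 10%; the lathe, floor space, and power still dominate.
print(amdahl_speedup(0.10, 1_000_000))  # about 1.11
```

Letting s grow without bound shows the hard ceiling: the limit is 1/(1-p), so the non-improvable fraction dominates no matter how perfect or cheap the technology becomes.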

In his book Nano (pp. 131-133), Ed Regis describes a phenomenon the early nanotech study groups called the “Miller point,” named after Mark Miller, marking the moment when he realized that nanotechnology would change everything. Absolutely everything.

The concept of the Singularity as generally understood today borrows a lot from Drexler’s concept of the Breakthrough, as well as from Isaac Asimov’s concept of the Intellectual Revolution (in his foreword to this book). It is not only nanotech as materials science and mechanical engineering that is transformative, but also our understanding and control of biology, the brain, and the ultimate mechanization of intelligence itself.

Of course, if we simply say we’ll build machines smarter than ourselves and they’ll solve the problems we can’t, we’ve pretty much resigned ourselves to being on the outside of the event horizon of the Singularity — and not knowing whether it’s something to work for or to avoid. I think we can do far, far better than that: the coming technological revolution is not as opaque as it’s cracked up to be. In follow-on essays, I propose to treat the nature of the Singularity, its effects on the various things we currently see as problems, as well as other improvements we only see as opportunities now.

Stay tuned.

February 10th, 2009 | Economics, Machine Intelligence, Nano, Nanodot, Nanotech, Nanotechnology | 9 Comments

  1. Anonymous February 10, 2009 at 8:16 am - Reply

    Well, I tend to hold to the notion, based on abundant evidence, that human beings ARE inherently selfish and tend toward evil; but I also strongly support and believe we must use advanced technologies and make the nanotech age happen. Think about it this way:

    One reason why we have not ended poverty with our advanced bulk technology is that it is still INHERENTLY EXPENSIVE for the average person to have control over manufacturing technologies, and those technologies are in and of themselves very limited.

    Once we have Self-Expanding, Self-Replicating Molecular/Meso/Macro Replicator-type systems, it becomes INHERENTLY CHEAP AND EASY to make things!

    This has two core areas of use:

    1. Even WITH THE INHERENTLY SELFISH NATURE of man, it makes it EASY and CHEAP, without much personal cost, for more people to do good deeds through the tech, like make copies of it and give it away to poor people, and make food, clothes, medicines, and so on with nanotech. This will almost guarantee it gets out to the people who need it the most.

    2. This also makes it easier and cheaper for nefarious-minded people such as terrorists to take this technology and make weapons, massive and not so massive, to kill and harm other people.

    One of my fears is this: How will the governments of the world, both repressive and less repressive, react to and deal with this? Will they say “Alright, we need more Constitutional libertarian rules, because there is no way we can stop people from using this nano factory stuff”, or will they say “Now we have to be more repressive, more tyrannical and hard-handed, and use the new technology to permeate society with surveillance machines”?

  2. Anonymous February 10, 2009 at 2:30 pm - Reply

    The average person is NOT going to have unfettered control over a self-replicating nanorobot. You can call it tyranny if you like, but it’s just not going to happen. The planet wouldn’t last 3 minutes before someone killed everyone else. Liberty is not worth the death of the human race, not by a long shot. Ideally, a ‘fair’ AI would be in control, one that would obey human commands as long as they followed ‘rules’ setting forth the basic rational actions that can happen: you could ask the AI to cook you a steak and it would, but if you asked it to kill your neighbor it would refuse. If you want to call that slavery and repression, fine, but I see no other choice. I’m sure it will be possible to effectively ‘prove’ that the AI will follow the rules it’s supposed to. There will probably be much whining and gnashing of teeth over the ‘possibility of the AI going berserk’, but the alternative (letting anyone do what they want) would be absurd, and short.

  3. Anonymous February 10, 2009 at 8:18 pm - Reply

    In his book ‘The Singularity is Near’, Kurzweil explains that, just as the Internet today has antivirus software, an advanced molecular manufacturing era would need to have an immune system in place to prevent a disaster. Also, today we already have very sensitive structures that are prone to being hacked and causing a lot of trouble. Additionally, when personal tabletop factories can manufacture anything from food to furniture, including cars and houses, our current money-based economy would be replaced by something else. How would any tyrannical government, or otherwise, pay law enforcement? I’d like to quote Julius Caesar in the 1963 film ‘Cleopatra’: “Legionaries make the law legal”.

  4. Anonymous February 10, 2009 at 11:22 pm - Reply

    There will always be someone to make hardware and software do more than it was intended to do. That’s what modern-day hackers are. If nanotechnology is released in any form, it will be just a matter of time before any security measures are exploited.

    The idea that the people creating nanotechnology are motivated simply by the betterment of mankind is naive. The truth is that a good chunk of the funding is coming from military organizations in almost every major country.

  5. Anonymous February 11, 2009 at 2:23 pm - Reply

    By the way, why are we all using the anonymous name? LOL, just wondering.

    Do you all think off-budget black programs have already succeeded in building working assembler devices or something similar? The problem with such speculation is that until we are told, or it is proven, it is a dead-end conversation; i.e., we will never know unless we see evidence, so forget about it, I guess.

    Aside from AI control, and nano immune systems, what are other methods that would allow us to have citizen owned nanotech without major disasters and terrorism?

  6. […] SOME THOUGHTS ON THE SINGULARITY, from J. Storrs Hall. […]

  7. Anonymous February 12, 2009 at 12:53 pm - Reply

    I think you are somewhat misinterpreting Hanson’s paper. His point is that, contrary to Miller, nanotech *doesn’t* change everything. Nano would only decrease production costs, but that will be an increasingly small fraction of the cost of future goods. The only thing that changes everything is increased intelligence, either human or machine. Hanson favors a scenario in which human brains are scanned and “uploaded” into computers, where they can be sped up and improved. Others expect to see super-intelligent AIs developed. While either of these technologies might well depend on nanotech, or at least be greatly facilitated by it, nano is not the key element in making them come true. It’s entirely possible that we could have nanotech for decades without achieving super intelligence, and during that time human brainpower will continue to be the bottleneck preventing a true Breakthrough.

  8. Anonymous February 12, 2009 at 12:56 pm - Reply

    Just as the History Channel keeps turning out shows about (possible) UFO sightings, we are going to keep hearing about the Mad Scientist who has achieved the impossible: the robot that can think for itself. Has a government come up with anything even close? Probably. Will we ever see or hear about it? I doubt it. Unless they have developed the so-called “Army of Bots” with which they would have complete control of the entire world’s economies, or one of them gets its picture in the paper because it killed someone.

    C. Webster Rose

  9. Anonymous February 16, 2009 at 9:39 am - Reply

    What is consciousness?
    Where is it located?
    Perhaps it is more widespread than is currently understood or accepted.
    Who is to say that the pervasive internet, connected to our minds, is not already conscious?
