More Thoughts on the Singularity Summit

Several of the talks at the Summit might be lumped together under the heading “AI — when and how?”

The two main pathways were the synthetic approach, discussed by people like Juergen Schmidhuber, Ben Goertzel, and Itamar Arel, and brain emulation, discussed by such people as Anders Sandberg, David Chalmers, and Ray Kurzweil. Kurzweil’s stance is well known: brain science and computer technology are each advancing at such a rate that by some foreseeable date we’ll know how the brain works and have the horsepower to emulate it. His analysis is in some sense an existence proof for AI in the not-too-distant future.

Ray puts the date for this at 2029; Itamar and Ben think the synthetic approach could work earlier if sufficient resources were applied. Itamar’s talk in particular was a nice overview of the state of the art in machine learning and related technologies, arguing that we already have all the parts and need only put them together. At the moment, though, we don’t have an existence proof for this path.

Today’s thought, though, has to do with an analysis by Chalmers on the potential for creating superintelligence.  He dismissed brain emulation as a route to superintelligence, because all you get out of it is a human brain’s worth of intelligence. This leaves us without an existence proof for superintelligence, although our instinct is that a synthetic intelligence approach could get there.

I claim, though, that we do have an existence proof for superintelligence: it’s not humans, but human societies. Put a thousand (emulated) brains in a box, and crank up the clock speed to whatever you can. Build in all the communications substrate they might need, and turn them loose. You can try different forms of internal organization — literally, try them, experimentally — and give the internal brains the ability to mate electronically, have children, and teach them in various ways. Some forms of human organization, for example the scientific community over the past 500 years, have clearly demonstrated the ability to grow in knowledge and capability at an exponential rate.

In what way could you argue such a box would not be a superintelligence? Indeed, some very smart people such as Marvin Minsky believe that this is pretty much the way our minds already work. And yet this “Society of Minds” would be a model we intuitively understand. And it would help us understand that, in a sense, we have already constructed superintelligent machines.
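To make the scaling intuition behind the "brains in a box" concrete, here is a toy Python sketch. It is purely my own illustration, not anything presented at the Summit: each "brain" independently takes guesses at a hard problem, and the society solves it as soon as any one member succeeds.

```python
import random

def avg_ticks_to_solve(n_agents, difficulty=1000, trials=200, seed=123):
    """Toy model of the 'thousand brains in a box' thought experiment.

    Each tick, every agent independently takes one guess at a hard problem
    (success probability 1/difficulty per guess); the society solves it as
    soon as any single agent succeeds. Returns the mean number of ticks
    over many trials.
    """
    rng = random.Random(seed)
    total_ticks = 0
    for _ in range(trials):
        ticks = 0
        while True:
            ticks += 1
            if any(rng.randrange(difficulty) == 0 for _ in range(n_agents)):
                break
        total_ticks += ticks
    return total_ticks / trials

# A lone agent needs on the order of `difficulty` ticks on average;
# a society of 100 agents working in parallel solves the same problem
# roughly two orders of magnitude faster.
solo = avg_ticks_to_solve(1)
society = avg_ticks_to_solve(100)
print(solo, society)
```

The expected ticks work out to 1/(1 − (1 − 1/d)^N) for N agents and difficulty d, so the return on adding brains is sublinear — which is exactly why the interesting question is not raw parallelism but which forms of internal organization the box experiments with.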

October 6th, 2009 | Machine Intelligence, Nanodot | 4 Comments

  1. […] can read his thoughts on the Singularity Summit here, here, and here. […]

  2. Chris Irwin Davis October 9, 2009 at 9:47 am - Reply

    Any thoughts on how many human-level minds we’d need to achieve superintelligence? Do we get a linear return? If not, do you have any intuition on where the “knee” of the curve would be?

    Would the population of the “Society” need to have a spectrum of capability (i.e., analogous to the IQ distribution curve among humans) to be most effective? Or would it be more beneficial to have only Smart brain-nodes in the Society/Network? I can envision arguments that societies derive benefit from drones that cannot otherwise be satisfied.

    Thanks for giving me a lot to think about!

  3. J. Storrs Hall October 10, 2009 at 2:00 am - Reply

    @Chris: I would guess you’d need a wide range of temperaments (gets bored easily or not, detail-oriented or big-picture type) and specific talents rather than IQ per se.
    Chances are that once we learn to emulate brains, we’ll discover how to build many of our subconscious processes into machines as narrow AIs that lack the brittleness of current programs. That might obviate many of the roles currently filled by the lower end of the bell curve.

  4. Crazy October 11, 2009 at 8:56 pm - Reply

    I like the way you stated that human minds’ automatic ability to create societies when separated from each other is the present reality of the evolution of artificial intelligence.

    Maybe we are the evolved A.I. on a different level. We might be a successful seed of a perfect, naturally evolved utopia, and we (the observers) are a necessary and destined milestone in the evolution of our universe! An iteration that has been repeated in an almost infinite scope using some super-sophisticated means. Maybe we WILL be THE UNIVERSE REPEATED. – coming out this fall

    Or maybe everything is random, or this is one of many failed attempts at a perfect civilization, because that’s what we’re after.

    This, in a short conversation:

    -Me: Hey man, check this theory of mine out…
    -Other person: I hear ya man. Deep thoughts.(crazy SOB)

Leave A Comment