Several of the talks at the Summit might be lumped together under the heading “AI — when and how?”
The two main pathways were the synthetic approach, discussed by people like Juergen Schmidhuber, Ben Goertzel and Itamar Arel, and brain emulation, discussed by such people as Anders Sandberg, David Chalmers, and Ray Kurzweil. Kurzweil’s stance is well known: brain science and computer technology are each advancing at such a rate that by some foreseeable date we’ll know how the brain works and have the horsepower to emulate it. His analysis is in some sense an existence proof for AI in the not-too-distant future.
Ray puts the date for this at 2029; Itamar and Ben think the synthetic approach could work earlier if sufficient resources were applied. Itamar’s talk in particular was a nice overview of the state of the art in machine learning and related technologies, with the argument that we already have all the parts and need only put them together. But at the moment we don’t have an existence proof for this path.
Today’s thought, though, has to do with an analysis by Chalmers on the potential of creating superintelligence. He dismissed brain emulation as a route to superintelligence, because all you get out of it is a human brain’s worth of intelligence. This leaves us without an existence proof for superintelligence, although our instinct is that a synthetic intelligence approach could get there.
I claim, though, that we do have an existence proof for superintelligence: not individual humans, but human societies. Put a thousand (emulated) brains in a box, and crank up the clock speed to whatever you can. Build in all the communications substrate they might need, and turn them loose. You can try different forms of internal organization — literally try them, experimentally — and give the internal brains the ability to mate electronically, have children, and teach them in various ways. Some forms of human organization, for example the scientific community over the past 500 years, have clearly demonstrated the ability to grow in knowledge and capability at an exponential rate.
In what way could you argue such a box would not be a superintelligence? Indeed, some very smart people such as Marvin Minsky believe that this is pretty much the way our minds already work. And this “Society of Mind” writ large would be a model we intuitively understand. It would help us see that, in a sense, we have already constructed superintelligent machines.