Jamais Cascio has an article in the current Atlantic about how humans are getting smarter. This is the best article on the subject I’ve seen in the mainstream press, and better than most in the transhumanist corner of the web.
Cascio’s main point is that, as we’ve always done, we build our technology to make ourselves smarter. It doesn’t matter if it’s drugs or PDAs or Google, the technology (and the ideas it embodies) makes us effectively smarter. This has been true since the invention of writing, or longer.
Cascio makes one point that transhumanists such as Michael Anissimov disagree with. Cascio writes:
My own suspicion is that a stand-alone artificial mind will be more a tool of narrow utility than something especially apocalyptic. I don’t think the theory of an explosively self-improving AI is convincing—it’s based on too many assumptions about behavior and the nature of the mind.
and Anissimov replies at length.
Gah! I’d like to hear more on this from other people of the same position, because I just don’t understand it.
Happy to oblige. The bottom line is that AI simply isn’t going to appear all at once in a single, anthropomorphic system. Intelligence is a huge, complicated mass of knowledge that is being invented one little piece at a time. A lot of the pieces are the very kind of thing that Cascio talks about in the article, which we are applying piecemeal to improve our own intelligence. Chances are that in a decade or so we’ll have enough of the pieces worked out that it will be possible to put them all together in a system that evaluates, controls, selects, and otherwise manages them so as to act like an integrated, human-style intelligence. But that won’t be an easy task, and at the same time we humans will be using the pieces ourselves, doing just the parts of the managing puzzle that we’re best at. Human-level intelligence is a moving target.
The key part of AI yet to be invented is the fluid, intuitive, estimating, connection-making, higher-level manager that controls the formal, boiled-down, automatable skills that make up most current AI. But that’s exactly the form Cascio claims the human mind is moving toward (since we’re putting the rest on silicon). Machines have a huge advantage over human brains when it comes to hard, symbolic calculation. Now I’m not among those who believe that the nebulous intuitive stuff can’t be done by computer, but I do think that doing it will require some pretty brute-force methods, and the machine-to-neuron advantage will shrink considerably.
Will pure machine intelligence pass us, individually? Almost certainly yes, because a wild-type human has a fixed amount of processing power, and the machines won’t have such limits. Will the machines surpass us as a civilization? No, because they’re part of civilization. We build machine intelligence specifically to make ourselves, collectively, smarter.