Singularity, part 5

This is the fifth essay in a series exploring if, when, and how the Singularity will happen, why (or why not) we should care, and what, if anything, we should do about it.

Part V: AIs: smarter or just faster?

One of the primary claims behind the notion that the Singularity will come with an event horizon is that as self-improving AIs take off into higher intelligence, they will not merely be like ordinary people with faster clock speeds; they will be like smarter people. In other words, we shouldn’t expect a pack of dogs to invent the Theory of Relativity (or even plain old Newtonian mechanics) no matter how long they tried, and we’re the dogs when compared to the AIs.

It may be surprising to some in this venue, but in the world of mainstream AI it is the opposite point that is debatable. That is, it’s not taken for granted that self-improving AI is even possible, much less predestined to take off into stratospheric reaches of smartness that leave us in the dust. The notion of a self-improving AI is known in some quarters as the bootstrap fallacy — and to date, every actual AI system that tries to learn has hit a “glass ceiling” and run out of steam.

Of course, the notion that self-improvement is possible is represented as well. For example, there’s a mathematical model of a learning AI, in a tradition going back to the field’s founders, that clearly can learn anything we would intuitively consider learnable. Unfortunately, this model isn’t remotely computationally feasible, so while it does give us a mental model of what an unlimited-learning machine could do, it doesn’t really help us build one.
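
If the model in question is, as seems likely, Solomonoff’s universal predictor (the same construction that underlies Hutter’s AIXI), its core can be stated in a line. This is my gloss rather than anything from the essay itself:

$$ M(x) \;=\; \sum_{p\,:\,U(p)\text{ outputs a string beginning with }x} 2^{-\ell(p)}, \qquad \Pr(a \mid x) \;\approx\; \frac{M(xa)}{M(x)} $$

Here U is a fixed universal prefix Turing machine, the sum ranges over every program p whose output starts with the observed data x, and ℓ(p) is the length of p in bits. The predictor provably converges on any computable data source, which is the precise sense in which it “can learn anything we would intuitively consider learnable”; but M itself is incomputable, which is the precise sense in which it “isn’t remotely computationally feasible.”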

To get further into the question of whether it’s actually possible to build one (and the corollary question of whether it’s possible for us to build one), it helps to look at models of unlimited-learning machines that actually exist. In this paper I looked at a couple of possibilities: biological evolution and the scientific community.

In both cases, the processes seem to be able to build on what has been learned before to accelerate learning thereafter — or at least not run out of steam as the search space gets bigger. We don’t completely understand how this works in the case of evolution, but in science there is a feedback through technology in the form of better instruments, better communication, better modelling tools, and so forth. It may or may not be that the individual scientist improves his own performance in the course of a normal life — but each innovation that does occur spreads through the community and improves the performance of many scientists.

This is of course like the way beneficial mutations, or propitious co-occurrences of alleles, can spread throughout a population in evolution. Dawkins famously noted the similarity and coined the term memes — which may at some point yield some insight into one or the other of the processes.

In any case it is clear that in biological evolution the individual is not (genetically) self-improving, and it’s debatable whether the individual scientist is. In each case, however, it is clear that the population as a whole is. More precisely, the gene pool or the stock of scientific knowledge is self-improving, with the population as its substrate.

The problem with the notion of runaway self-improvement is that it fails to make a distinction between two concepts of intelligence: the first is simply human IQ, and the second is the growing knowledge and sophistication of the scientific community.

These seem to lie along the same dumber-to-smarter spectrum, and it’s natural to conflate them. But on closer inspection, they are not the same. IQ, for example, is generally quite stable over a person’s lifetime. The distribution of IQs in the population of scientists is probably fairly stable too — indeed, since there are so many more scientists now than in Galileo’s day, the average may perforce have shifted down a bit. Thus the IQ scale is not the scale along which AIs can be expected to improve themselves, on the scientific-community model.

This leaves us with the possibility, however, that there is some point on the IQ scale that is necessary to get into the game at all. John von Neumann clearly believed there was, in what he called the “complexity barrier”:

There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself. (Theory of Self-Reproducing Automata)

If von Neumann is right, there are two possibilities: either the critical size lies at or below the level of an individual human mind, or it lies above the individual but at or below the level of a community of human minds (the scientific community, after all, demonstrably does improve itself). In either case, the only working example of self-improvement we have operates at the scale of a community.

And that means that when we do build a self-improving AI, it will look a lot like the scientific community — but run faster. To claim the opposite, you’d have to argue that there is a form of intelligence that (a) can run on a computer, (b) as a program initially written by humans, and yet (c) could not be simulated by any conceivable arrangement of intelligent humans running however fast. (I trust I don’t have to point out that the scientific community can simulate a Turing Machine …)

The scientific community in some sense isn’t composed of people, but of ideas. The humans under different circumstances could just as well have been knights errant, or seafaring traders, or hunters and gatherers. AIs will be composed entirely of human ideas, and everything they ever think of will be traceable back to us in direct line of memetic descent.

Now I will readily admit that if, in the fullness of time and Moore’s Law, someone built a brain consisting of 100,000 high-end human equivalents, all running 100 times faster than biological humans and connected by appropriate communications networks and other infrastructure, and put the whole business into the head of a robot, I’d call that a hyperhuman intellect. But it wouldn’t understand anything, in principle, that those humans couldn’t have understood given enough centuries.
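
A back-of-the-envelope calculation makes the “given enough centuries” point concrete. This is just a sketch using the figures quoted above, nothing more:

    # Throughput of the hypothetical robot brain described above:
    # 100,000 human-equivalent minds, each running 100x biological speed.
    brains = 100_000      # high-end human equivalents
    speedup = 100         # clock-speed multiple over biological humans

    # Human-years of thinking the composite mind gets through per calendar year.
    human_years_per_year = brains * speedup
    print(human_years_per_year)                    # 10,000,000

    # Years the same 100,000 people would need, at biological speed,
    # to match one year of the robot's thinking.
    print(human_years_per_year / brains, "years")  # 100.0 years

In other words, one year of the robot’s thinking is a century of work by the same community of minds: a change of schedule, not of kind.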

In some sense, the point is moot: no individual can understand all that the scientific community knows anyway. What happens in the future was always going to grow more and more fantastic, and be understandable only in broad, vague generalities by present-day humans. With AIs it may happen faster, but it will follow the same track.

Take the graph of knowledge and capabilities as they would have increased in a purely human future. Adding AIs, you get exactly the same line — you just change the dates on the x-axis.

There is one proviso: a future in which we understand AI could well be different from one in which we don’t understand it. It seems possible that the knowledge of how to build a formal, mechanical system that nevertheless exhibits common sense could revolutionize the effectiveness of our corporate and political structures (which currently have about as much common sense as a lazoon) — whether we build physical robots or not.
