The Black Box Fallacy

Consider this marvelous story by Richard Feynman:

(watch it now, this won’t make too much sense otherwise)

Feynman and his friend John Tukey discover that they have completely different internal ways of thinking, or at least of counting, even though they are using the same words to talk about what it is they’re doing.

Consider two scientists working on some problem, or two engineers working on some design: In each case, the way the person thinks forms an inductive bias that makes the insight necessary for the task easier to reach in some cases and harder in others.

If you broaden your scope to all human endeavor, the biases are surely even more varied than between scientists and engineers, who all think a lot alike already. This diversity is the glory of human thinking, but more specifically it is what makes human progress work. No individual human could invent or discover all the things that humans as a whole do, because the inductive biases that enable some insights prevent others.

Nowhere is this more obvious than in politics: two people with the same overall goal of “promote the general welfare” will argue bitterly over the means and appear to be talking completely past each other. On the other hand, it’s even true in science: the adage “Science advances, funeral by funeral” has more than a little truth in it.

Now suppose you want to make a superintelligent AI. Which inductive bias should you give it, among all the ones humans have (and all the many many more possible ones)? You’d obviously prefer to give it some mechanism to apply different biases to different problems. That is what happens in a marketplace, where people specialize, and in the scientific community, the “marketplace of ideas.” The only two really decent models we have of long-term self-improving learning machines, evolution and the scientific community, work by a process of self-organizing diversity, not a rigid hierarchy.

Proponents of Intelligent Design fail not because their theory is obviously derived from religious mythology, but because it has no explanatory power. “Nothing in biology makes sense except in light of evolution.” If there were a Designer, why would he have made all the crazy design choices that we see in living things? With evolution, they make sense. With a Designer, to gain the same understanding, we need to know how his mind works. Wrapping the process up in a personified black box explains nothing — all it does is leverage the average person’s cognitive biases in such a way as to hide the fact that it explains nothing.

The same thing is true, I claim, of anyone who talks about a superintelligent AI. There’s a tendency in the singularitarian community to draw a line, put cockroaches on the left end, humans in the middle, and point to something over on the right end and say “that’s what superintelligence must be like.” But that’s essentially the same dodge as the Intelligent Designer — the medieval Great Chain of Being has the same spectrum with the beasts of the field, men, angels, and God lined up the same way. It’s a very compelling way to think, because it lines up with our cognitive biases so well — but in the end I believe it is essentially a religious one with little explanatory power.

Evolution created intelligence lots of times, in fairly unrelated species, so we can assume separate evolution: octopuses, birds, primates. Why didn’t a superintelligence evolve? The key question is, what’s more effective, given two brains’ worth of resources — one big brain or two small ones? At some point, evolution typically picks many smaller ones. As we build new substrates with different economics and speed/computation tradeoffs, the sweet spots may shift, but the basic question will remain: Which will be more efficient economically, one big brain or many small ones? My guess is that the answer is going to look something like a Zipf’s Law distribution, with prevalence being inversely proportional to size. The biosphere certainly works that way.
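
To make that Zipf’s Law guess concrete, here is a minimal sketch (the sizes and the resource budget are hypothetical numbers, not anything claimed above): if the number of minds at a given size is inversely proportional to that size, then every size class ends up consuming an equal share of a fixed resource budget, and small minds vastly outnumber big ones.

```python
# Illustrative sketch of a Zipf-style allocation of "brain sizes":
# count(size) is proportional to 1/size, so each size class spends
# an equal slice of the total resource budget. All numbers are made up.

def zipf_allocation(sizes, total_resources):
    """Return {size: count} with count proportional to 1/size,
    such that sum(count * size) equals total_resources."""
    per_class = total_resources / len(sizes)   # equal budget per size class
    return {s: per_class / s for s in sizes}

if __name__ == "__main__":
    sizes = [1, 2, 4, 8, 16]                   # hypothetical relative brain sizes
    alloc = zipf_allocation(sizes, total_resources=1000)
    for size, count in alloc.items():
        print(f"size {size:>2}: ~{count:6.1f} minds, resources {count * size:6.1f}")
    # Each size class consumes the same total resources (200 here),
    # so the smallest minds are by far the most numerous.
```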

So the question is, how much “above” the human level will this spectrum go? Not too much, I think. The human mind is already heavily modularized — listen to Feynman above describing the “talking machine” that counting occupies while another module can be used simultaneously. For the tasks that require superhuman intelligence, human-level minds in various patterns of organization and communication, with a variety of specializations and inductive biases, are likely to work as well as any black box constructed some other way.

Of course, they may run faster … but then, so can you, with a substrate upgrade.
