The Black Box Fallacy

Consider this marvelous story by Richard Feynman:

(watch it now; the rest of this post won’t make much sense otherwise)

Feynman and his friend John Tukey discover that they have completely different internal ways of thinking, or at least of counting, even though they are using the same words to talk about what it is they’re doing.

Consider two scientists working on some problem, or two engineers working on some design: in each case, the way the person thinks forms an inductive bias that, depending on the task, either helps or hinders the ease with which he or she gains the necessary insight.

If you broaden your scope to all human endeavor, the biases are surely even more varied than between scientists and engineers, who all think a lot alike already. This diversity is the glory of human thinking, but more specifically it is what makes human progress work. No individual human could invent or discover all the things that humans as a whole do, because the inductive biases that enable some insights prevent others.

Nowhere is this more obvious than in politics: two people with the same overall goal of “promote the general welfare” will argue bitterly over the means and appear to be talking completely past each other. On the other hand, it’s even true in science: the adage “Science advances, funeral by funeral” has more than a little truth in it.

Now suppose you want to make a superintelligent AI. Which inductive bias should you give it, among all the ones humans have (and all the many, many more possible ones)? You’d obviously prefer to give it some mechanism for applying different biases to different problems. That is what happens in a marketplace, where people specialize, and in the scientific community, the “marketplace of ideas.” The only two really decent models we have of long-term self-improving learning machines, evolution and the scientific community, work by a process of self-organizing diversity, not rigid hierarchy.

Proponents of Intelligent Design fail not because their theory is obviously derived from religious mythology, but because it has no explanatory power. “Nothing in biology makes sense except in light of evolution.” If there were a Designer, why would he have made all the crazy design choices we see in living things? With evolution, they make sense. With a Designer, to gain the same understanding, we would need to know how his mind works. Wrapping the process up in a personified black box explains nothing: all it does is leverage the average person’s cognitive biases in such a way as to hide the fact that it explains nothing.

The same thing is true, I claim, of anyone who talks about a superintelligent AI. There’s a tendency in the singularitarian community to draw a line, put cockroaches on the left end, humans in the middle, and point to something over on the right end and say “that’s what superintelligence must be like.” But that’s essentially the same dodge as the Intelligent Designer: the medieval Great Chain of Being has the same spectrum, with the beasts of the field, men, angels, and God lined up the same way. It’s a very compelling way to think, because it lines up with our cognitive biases so well, but in the end I believe it is essentially a religious one with little explanatory power.

Evolution created intelligence many times, in species unrelated enough that we can assume it evolved separately: octopuses, birds, primates. Why didn’t a superintelligence evolve? The key question is: what’s more effective, given two brains’ worth of resources, one big brain or two small ones? At some point, evolution typically picks many smaller ones. As we build new substrates with different economics and speed/computation tradeoffs, the sweet spots may shift, but the basic question will remain: which will be more efficient economically, one big brain or many small ones? My guess is that the answer will look something like a Zipf’s Law distribution, with prevalence inversely proportional to size. The biosphere certainly works that way.
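The inverse-proportionality guess can be made concrete with a toy calculation (my illustration, not anything from the post): allocate a fixed budget of “brainpower” across mind sizes so that the count of minds at each size is proportional to 1/size.

```python
def zipf_counts(sizes, resources=1_000_000):
    """Toy model: number of minds at each size, with counts
    proportional to 1/size, scaled so the whole resource budget
    (sum of count * size) is used."""
    weights = {s: 1.0 / s for s in sizes}
    scale = resources / sum(w * s for s, w in weights.items())
    return {s: scale * w for s, w in weights.items()}

# With a 1000-unit budget over sizes 1, 2, 4, 8, the count at
# each tier halves as the size doubles:
counts = zipf_counts([1, 2, 4, 8], resources=1000)
# counts[1] == 250.0, counts[2] == 125.0, counts[4] == 62.5, counts[8] == 31.25
```

Under this toy rule each size tier consumes an equal share of the total budget, which is one way to read “prevalence inversely proportional to size”; the actual distribution would of course depend on the real economics of the substrate.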

So the question is, how far “above” the human level will this spectrum go? Not too much, I think. The human mind is already heavily modularized: listen to Feynman above describing the “talking machine,” which can run while another module works on something else simultaneously. For the tasks that require superhuman intelligence, human-level minds in various patterns of organization and communication, with a variety of specializations and inductive biases, are likely to work as well as any black box constructed some other way.

Of course, they may run faster … but then, so can you, with a substrate upgrade.

August 26th, 2009 | Complexity, Machine Intelligence, Nanodot | 7 Comments


  6. Tim Tylerr September 5, 2009 at 12:30 pm

    Check the sperm whale brain. Brains on land are more strongly selected against than brains in water. Also, check the scale of Google’s data centres – and think how early on we are. Some intelligent machines seem likely to be vast.
    [Sure — so is the internet. But drawing a line around it and saying “the internet did this” and “the internet did that” is a hindrance, not a help, to actually understanding what’s going on. -jsh]

  7. Tim Tylerr September 6, 2009 at 10:37 am

    I am more inclined to the view that the internet is the beginning of a planetary nervous system, and modeling it as such is genuinely useful – or at least a lot more useful than it is misleading.

    Anyway, that seems a bit beside the point. This post claims that synthetic intelligence won’t go far beyond human level? The evidence? Zipf’s Law based on animal brains so far. It is hardly a compelling case for a low ceiling. I am more impressed by things like the rise in brain-power over time since the Cambrian era, the radical success of the brainiest land creatures, and the formation of huge collective organism-like agents – in the form of companies (and their associated data-centres).
