Learning from science

There’s a really nice article at Wired about Kevin Dunbar’s research on how science is really done, and how often scientists get data they didn’t expect.

Dunbar knew that scientists often don’t think the way the textbooks say they are supposed to. He suspected that all those philosophers of science — from Aristotle to Karl Popper — had missed something important about what goes on in the lab. (As Richard Feynman famously quipped, “Philosophy of science is about as useful to scientists as ornithology is to birds.”) …

Dunbar brought tape recorders into meeting rooms and loitered in the hallway; he read grant proposals and the rough drafts of papers; he peeked at notebooks, attended lab meetings, and videotaped interview after interview. …

Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit. Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) “The scientists had these elaborate theories about what was supposed to happen,” Dunbar says. “But the results kept contradicting their theories. It wasn’t uncommon for someone to spend a month on a project and then just discard all their data because the data didn’t make sense.”

The real world, it turns out, is a messy place, even in a completely controlled laboratory. The job of a scientist, after all, is to abstract clean, understandable rules and regularities from the messiness.

But consider: information theory tells us that the amount of information a signal carries depends on how unexpected it is. A signal we already know will be a string of 0’s conveys no information at all. So we should hope that the data from an experiment will be unexpected.
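This is just Shannon’s notion of surprisal: an outcome with probability p carries −log₂(p) bits of information, so a certain outcome carries zero. A minimal sketch in Python:

```python
import math

def surprisal_bits(p):
    """Bits of information gained by observing an outcome of probability p."""
    return -math.log2(p)

# An outcome we were certain of (p = 1) carries no information:
print(surprisal_bits(1.0))    # 0.0

# An unexpected outcome (p = 1/8) carries 3 bits:
print(surprisal_bits(1 / 8))  # 3.0
```

In this sense, the all-zeros signal we knew was coming has p = 1 for every symbol, and each symbol contributes exactly zero bits.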

The import of the article, and of Dunbar’s research, lies in how the new information is used.

“The scientists were trying to explain away what they didn’t understand,” Dunbar says. “It’s as if they didn’t want to believe it.”
The experiment would then be carefully repeated. Sometimes, the weird blip would disappear, in which case the problem was solved. But the weirdness usually remained, an anomaly that wouldn’t go away.
This is when things get interesting. According to Dunbar, even after scientists had generated their “error” multiple times — it was a consistent inconsistency — they might fail to follow it up. “Given the amount of unexpected data in science, it’s just not feasible to pursue everything,” Dunbar says. “People have to pick and choose what’s interesting and what’s not, but they often choose badly.” And so the result was tossed aside, filed in a quickly forgotten notebook. The scientists had discovered a new fact, but they called it a failure.

Now of course most of the time the new fact is something like “the particular bleach processing used to make this particular filter paper produces surface irregularities on the fibers that have an unexpected interaction with this particular protein when prepared this particular way”, a fact that will never be of use to anyone who’s not trying that particular experiment.  So most of the time it is the right thing for scientists to do to ignore the anomalous results and redo the experiment with different but “equivalent” equipment (or whatever).

An anomaly rate of 75% represents a huge information stream in information-theoretic terms.  But it’s mostly noise.  So scientists have filters in their minds to deal with it — as do we all (read the rest of the article for a little neuroscience about that).  The filters explain why “normal science” can proceed for so long in the face of anomalies before a Kuhnian paradigm shift occurs.  It’s a perfectly reasonable bias to assume that your existing theory, which has worked in the past, is right and that the contradictory evidence is noise.  It usually is.

But if the bias of your filters somehow gets set by something else — a political belief, for example — the fact that the filters control so much of what you see can steer you wrong a lot faster than you think.

In a perfect world, whenever someone did an experiment, all the data would be put online, accessible to anyone who cared to look, instead of filed away in a “quickly forgotten notebook”.  In the 20th century that would have been a utopian dream; but today, it’s possible, and tomorrow, it should be relatively easy.

Imagine a world where, say, just 1% of today’s MMORPG players spent their time and efforts crawling through lab records, analysis programs, and satellite feeds — gleaning not virtual gold, but scientific truth.
