The evolution of science moves, in Kuhn’s famous theory, not in a smooth accretion of knowledge but in a series of punctuated equilibria. This means that before a paradigm shift happens, there is an overhang period in which the majority of scientists believe something that the majority won’t believe a scientific generation later.
Thomas Bouchard, the psychologist who pioneered and championed the modern method of twin studies (which has overturned a number of pieces of conventional wisdom about inheritance over the past 40 years), is retiring, and was honored with, among other things, an interview in Science. The interview was picked up and commented on by several writers, including Nicholas Wade in the NY Times, Stephen Hsu in Technology Review, and Robin Hanson in Overcoming Bias.
Academics, like teenagers, sometimes have no sense of the degree to which they are conformists.
We still have whole domains we can’t talk about.
If the brightest minds on Wall Street got suckered by group-think into believing house prices would never fall, what other policies founded on consensus wisdom could be waiting to come unraveled?
What’s wrong with consensuses is not the establishment of a majority view, which is necessary and legitimate, but the silencing of skeptics.
Most groups have a long queue of contrarian ideas they are neglecting. What groups need are ways to get them to devote more resources to this entire contrarian set, especially the true parts of the set.
The problem, as Hanson points out, is that you, or indeed the scientific community, simply don’t have the resources to do a thorough investigation of every crackpot who claims that Einstein was wrong, or that ESP or astrology works, or whatever. But if you ignore all the contrarians, you’ll miss Wegener and continental drift, Bouchard and his heritability findings, Drexler and his nanomachines, and indeed Einstein himself, who was pretty contrarian in his day.
Volumes could be and have been written on the subject, but I have one modest proposal: look for sources of bias in scientists’ thinking and correct for them using perfectly standard statistical means. Here’s an example:
We know that political orientation biases scientists’ views of even purely scientific questions, to say nothing of questions of policy. There is no reason there should be a strong correlation between the belief that cloud-based radiative climate-forcing feedback is positive and the belief that nationally administered health care plans are more efficient than privately administered ones, but there is. Thus we can take it as a working assumption that political views bias scientific judgement.
Now consider the recent Pew survey of the attitudes of scientists and the public on various issues. While assuming scientists are biased, let’s also assume that they know more about scientific issues than the general public does. In fact, let’s completely ignore the public’s views on the issues; we want to remove a bias, not measure public ignorance.
Pew handily gave us a calibration yardstick: they divided both scientists and the public into three groups: Democrats (35% of the public, 55% of scientists), independents (34% of the public, 32% of scientists), and Republicans (23% of the public, 6% of scientists). We now assume each population is normally distributed along a continuous political spectrum and set the tails of the two distributions equal (i.e., make the breakdowns line up). This lets us solve for the mean and standard deviation of the scientists on the public’s political scale. The scientists’ mean comes out to -0.484 sigma on the public scale (left-leaning is more negative), and their sigma is 0.87 of the public’s.
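This fit takes only a few lines to reproduce. Here is a minimal sketch in Python, using only the standard library’s NormalDist; it matches the Democrat fraction on the left tail and the Republican fraction on the right tail (the breakdowns don’t sum to 100%, so other choices of which cut points to match are possible and shift the sigma somewhat):

```python
from statistics import NormalDist

N = NormalDist()  # public scale: standard normal, left-leaning more negative

# Pew party breakdowns (fractions of each population)
pub_dem, pub_rep = 0.35, 0.23
sci_dem, sci_rep = 0.55, 0.06

# Cut points on the public (standard normal) scale
left = N.inv_cdf(pub_dem)        # Democrat / rest boundary
right = N.inv_cdf(1 - pub_rep)   # rest / Republican boundary

# The same cut points expressed in scientist z-units
a = N.inv_cdf(sci_dem)
b = N.inv_cdf(1 - sci_rep)

# Solve left = mu + a*sigma and right = mu + b*sigma for mu, sigma
sigma = (right - left) / (b - a)
mu = left - a * sigma

print(f"mu = {mu:.3f}, sigma = {sigma:.2f}")
```

This particular pairing of cut points reproduces the mean of -0.484 quoted above; the sigma it yields (about 0.79) is somewhat sensitive to how the unaffiliated remainder of each population is treated.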
We can now correct for political orientation, in the sense that we can calculate how many scientists would agree with the Pew questionnaire’s statements if scientists were distributed politically the way the general population is. Here they are:
- Humans and other living things evolved: 69% (huh?? but see below)
- The earth is warming due to human activity: 65%
- Animals should be used in research: 96% (notice that the “no’s” on this question fall on the left tail of the distribution, instead of the right)
- Govt should fund stem cell research: 79%
- Build more nuclear power plants: 83% (again “no’s” on the left)
- All parents should be required to vaccinate their children: 62%
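The correction behind these numbers can be sketched as follows, assuming agreement is monotone in political orientation: find the threshold on the scientists’ distribution (mean -0.484, sigma 0.87 on the public scale) that reproduces the scientists’ raw agreement, then ask what fraction of a population distributed like the public lies on the agreeing side of that threshold. The raw scientist percentages below are my reading of the 2009 Pew report, not figures stated in this post:

```python
from statistics import NormalDist

N = NormalDist()
MU, SIGMA = -0.484, 0.87  # scientists' mean and spread on the public scale

def corrected(p_sci, no_on_left=False):
    """Agreement among scientists redistributed politically like the public,
    given raw scientist agreement p_sci and which tail the 'no's occupy."""
    if no_on_left:
        # agreement increases to the right: threshold leaves p_sci to its right
        t = MU + SIGMA * N.inv_cdf(1 - p_sci)
        return 1 - N.cdf(t)
    # default: agreement increases to the left
    t = MU + SIGMA * N.inv_cdf(p_sci)
    return N.cdf(t)

# (question, raw Pew scientist agreement, "no"s on the left tail?)
questions = [
    ("evolution",            0.87, False),
    ("human-caused warming", 0.84, False),
    ("animal research",      0.93, True),
    ("stem cell funding",    0.93, False),
    ("nuclear power",        0.70, True),
    ("vaccination",          0.82, False),
]
for name, p, flip in questions:
    print(f"{name}: {corrected(p, flip):.0%}")
```

With these inputs the function reproduces the list above: 69%, 65%, 96%, 79%, 83%, and 62%.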
There are two cases where the correction intensified the scientists’ agreement: animal research and nuclear power. Nuclear power is the surprise, because it represents a flip in prevailing attitudes. Animal research and vaccination are the only cases where the scientists’ views fall to the right of the public’s, and vaccination is the only one where the correction creates the difference. Even so, the “corrected scientist” and the public agree closely, 62% to 69%.
More than one in three corrected scientists are skeptical about global warming claims, which seems reasonable given the state of knowledge in the field. Except for vaccination, this is the question where the public and corrected scientist are in closest agreement. In fact, I’d say that my subjective guesses about the various issues (when recast as scientific questions, e.g. are nuclear plants safe and economical) match the corrected scientists’ (with percentages as probabilities) right down the line, with one big exception.
The really crazy one is evolution. It admits of an explanation, however, which I hope doesn’t sound too much like special pleading. On all the other questions, the public was essentially agnostic, split roughly 50-50 (except vaccination, where, as noted, the public agreed well with the corrected scientist). But on evolution, only 32% of the public agrees! Thus there remains a complete population inversion on the evolution question: a clear majority of the public disagrees with a clear majority of corrected scientists. No other question exhibits this phenomenon. Thus I claim there is reason to reject the correction in this case. We have to view it as an abject failure on the part of science education in the US. The paradigm shift in evolution happened 150 years ago, but the entrenched religious view has hung on, more successfully than one might have thought.
So what’s the next paradigm shift? From this list, probably nuclear power. After that, perhaps, a loosening of political correctness in climatology. But the technique could be used for a much broader set of questions, and a much more well-developed model of political (and religious) bias. And then, who knows?