“The decline effect”: Why transparency in science is a good thing

Jonah Lehrer wrote a fascinating article (The Truth Wears Off) in The New Yorker last month about the “decline effect” in scientific research, which he describes as the tendency of initially promising results to fade over time. In the process of exploring this tendency, Lehrer discusses some of the well-known processes that can distort the scientific enterprise, including selective reporting of results by scientists and the publication biases of peer-reviewed journals. All in all, I thought it was an interesting, fair, thought-provoking piece that gives a general audience some sense of how our human foibles can distort even the most objective of endeavors.

And so I was a bit surprised to see Dr. John M. Grohol, whom I greatly admire and whose blogging I’ve appreciated for years, dismiss Lehrer’s piece as “a somewhat dumbed-down and sensationalistic article” (Is Science Dead? In a Word: No). Dr. Grohol credits biology professor and science blogger PZ Myers with publishing the best rebuttal (Science is not dead) to Lehrer, which includes a nice list of possible explanations for why “statistical results from scientific studies that showed great significance early in the analysis are less and less robust in later studies”:

Regression to the mean: As the number of data points increases, we expect the average values to regress to the true mean…and since often the initial work is done on the basis of promising early results, we expect more data to even out a fortuitously significant early outcome.

The file drawer effect: Results that are not significant are hard to publish, and end up stashed away in a cabinet. However, as a result becomes established, contrary results become more interesting and publishable.

Investigator bias: It’s difficult to maintain scientific dispassion. We’d all love to see our hypotheses validated, so we tend to consciously or unconsciously select results that favor our views.

Commercial bias: Drug companies want to make money. They can make money off a placebo if there is some statistical support for it; there is certainly a bias towards exploiting statistical outliers for profit.

Population variance: Success in a well-defined subset of the population may lead to a bit of creep: if the drug helps this group with well-defined symptoms, maybe we should try it on this other group with marginal symptoms. And it doesn’t…but those numbers will still be used in estimating its overall efficacy.

Simple chance: This is a hard one to get across to people, I’ve found. But if something is significant at the p=0.05 level, that still means that 1 in 20 experiments with a completely useless drug will still exhibit a significant effect.

Statistical fishing: I hate this one, and I see it all the time. The planned experiment revealed no significant results, so the data is pored over and any significant correlation is seized upon and published as if it was intended. See previous explanation. If the data set is complex enough, you’ll always find a correlation somewhere, purely by chance.
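Those last two points are worth pausing on, because they’re the easiest to demonstrate. Here’s a minimal sketch in Python (assuming NumPy and SciPy are available; the group sizes and the 20-outcome “fishing expedition” are made-up numbers, chosen purely for illustration) showing that a completely useless drug clears the p < 0.05 bar roughly 1 time in 20, and that testing enough unrelated outcomes makes a spurious “finding” more likely than not:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_per_group = 1000, 30

# Simple chance: both groups come from the same distribution, so the drug
# truly does nothing. Any "significant" result at p < 0.05 is a false positive.
false_positives = 0
for _ in range(n_experiments):
    drug = rng.normal(0, 1, n_per_group)     # no real effect
    placebo = rng.normal(0, 1, n_per_group)  # same distribution
    _, p = stats.ttest_ind(drug, placebo)
    if p < 0.05:
        false_positives += 1
print(f"False positive rate: {false_positives / n_experiments:.3f}")  # ~0.05, i.e. ~1 in 20

# Statistical fishing: measure 20 unrelated outcomes in a single null
# experiment and ask whether *any* of them looks significant.
n_outcomes = 20
fished_something = 0
for _ in range(n_experiments):
    drug = rng.normal(0, 1, (n_per_group, n_outcomes))
    placebo = rng.normal(0, 1, (n_per_group, n_outcomes))
    _, p_values = stats.ttest_ind(drug, placebo, axis=0)
    if (p_values < 0.05).any():
        fished_something += 1
print(f"Chance of fishing up at least one 'significant' result: "
      f"{fished_something / n_experiments:.3f}")  # roughly 1 - 0.95**20, about 0.64
```

Nothing about this is specific to drug trials, of course; the point is simply that “significant at p = 0.05” is a weak guarantee once you account for how many comparisons were actually run.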

I have no problem deferring to both Myers’s and Grohol’s expertise in science, and I find them each to be thoughtful writers, but I think they both significantly misread Lehrer. For instance, Grohol inaccurately states that “what Lehrer failed to note is that most researchers already know about the flaws he describes”. Lehrer most certainly did not fail on that count, as he makes clear in his concluding paragraph:

And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything.

Myers likewise implies that Lehrer is making a “fuss” out of problems that scientists already know about, and he assures his readers that everything is under control, that “science works [and] that’s all that counts.” As Myers sees it, Lehrer is guilty of “overselling of the flaws in the science” and ultimately offers us a conclusion that is “complete bullshit.” The bulk of the commenters on Myers’s post are even more damning of Lehrer, many implying that he’s basically an idiot who should be ashamed of himself for providing fodder for the anti-science/anti-intellectual movement that many see as a growing problem in the United States.

I think that to take a “defender of science” stance is to miss the main point of Lehrer’s piece. The article was published in The New Yorker, for a general audience that most definitely does NOT know about all the biases and corrupting influences that Lehrer, Myers, and Grohol each mention in their respective pieces. The scientific method is just a tool, and a fine one at that. But the way science is actually done in our society has become increasingly corrupted by special interest groups and the profit motive. In this sense science is no different from other tools we have at our disposal, like our system of democracy, our free press, the internet, guns, etc. It’s not being anti-anything to bring attention to the ways our tools can be and are being misused. It’s not anti-American to point out the ways our political system has become corrupted, and it’s not anti-science for Lehrer to highlight the ways in which scientists often fall short of optimal objectivity.

Lehrer published a follow-up article (More thoughts on the decline effect) in which he quotes a critique from Dr. Robert Johnson of Wayne State Medical School:

Creationism and skepticism of climate change are popularly-held opinions; Lehrer’s closing words play into the hands of those who want to deny evolution, global warming, and other realities. I fear that those who wish to persuade Americans that science is just one more pressure group, and that the scientific method is a matter of opinion, will be eager to use his conclusion to advance their cause.

I think this gets right to the heart of the matter and explains why, in my opinion, so many scientists have been inclined to dismiss Lehrer. Yes, anti-intellectual types might seize on Lehrer’s piece to further their agendas, but they’re going to find support for their lunacy one way or another. The fact of the matter remains: transparency in science is a good thing. As Lehrer reminds us:

We know science works. But can it work better? There is too much at stake to not ask that question. Furthermore, the public funds a vast majority of basic research—it deserves to know about any problems.

Science may be the best tool we have for advancing knowledge, but that doesn’t mean we should trust, at face value, what’s reported in the media as science, nor should we blindly accept the conclusions of scientific organizations and authorities. The scientific method may be just fine, but as in every other profession and institution in our society, money and ego are increasingly corrupting influences. Corruption has to be exposed in order to be dealt with constructively. As Lehrer concludes (and I think both Grohol and Myers would agree):

The larger point, though, is that there is nothing inherently mysterious about why the scientific process occasionally fails or the decline effect occurs. As Jonathan Schooler, one of the scientists featured in the article told me, “I’m convinced that we can use the tools of science to figure this”—the decline effect—“out. First, though, we have to admit that we’ve got a problem.”