A paper entitled ‘The Natural Selection of Bad Science’ has been making the news rounds recently. Its central thesis is that bad scientific practices are the result of perverse incentives, such as rewarding research “impact” rather than asking whether the underlying methodology is sound. Labs which use bad methodology to get “sexy”, but likely wrong, results transmit those practices to their students, who continue them if they set up labs of their own. The most interesting aspect of the paper is that there need be no direct cheating or collusion among researchers to produce bad science; an environment which rewards splashy results over good scientific practice is sufficient to propagate it. Like memes as originally envisioned by Richard Dawkins, scientific practices are a culturally transmissible trait with far-reaching consequences for wider society.
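The paper formalises this with an evolutionary model; the toy sketch below is my own loose illustration of the dynamic, not the authors' actual model. It assumes an invented payoff function in which sloppier labs produce more publishable “positive” results, and new labs inherit the rigor of the labs that published most:

```python
import random

def simulate(n_labs=100, generations=50, seed=1):
    """Toy selection dynamic (illustrative only): labs with lower rigor
    produce more publishable results, and new labs inherit the rigor of
    successful parents. All parameters here are invented for illustration."""
    rng = random.Random(seed)
    rigor = [rng.random() for _ in range(n_labs)]  # 0 = sloppy, 1 = careful
    history = [sum(rigor) / n_labs]                # mean rigor per generation
    for _ in range(generations):
        # Payoff: a base rate of genuine findings, plus false positives
        # that accrue faster to low-rigor labs.
        payoff = [0.2 + 0.8 * (1 - r) for r in rigor]
        # Next generation: offspring labs are drawn in proportion to payoff,
        # inheriting parental rigor with a small "mutation".
        rigor = [
            min(1.0, max(0.0, rng.choices(rigor, weights=payoff)[0]
                         + rng.gauss(0, 0.02)))
            for _ in range(n_labs)
        ]
        history.append(sum(rigor) / n_labs)
    return history

hist = simulate()
print(f"mean rigor: gen 0 = {hist[0]:.2f}, gen 50 = {hist[-1]:.2f}")
```

Even with no lab “cheating” — each simply reproduces in proportion to its output — mean rigor drifts downward over generations, which is the paper's point about selection without collusion.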
The paper is well worth the read. From my own analyses of the tiny research field I currently occupy, I can well believe that there are some serious problems in the scientific literature. Papers which end up in high-impact journals such as Science or Nature get much attention and fanfare, encouraging the adoption of bad practices because they get results. It is easy to be bamboozled; papers which look pretty great on the surface can, on deeper inspection, hold some serious flaws. It is like enjoying a party until you find out that the punch has been spiked: no one else realises it, yet it is hiding in plain sight, as it were. Now, science, as you may well have heard, is supposed to be a noble pursuit of “truth”. Current incentives in the academic job market make it seem as if “truth” is synonymous with churning out papers. The inevitable bun-fight ensues. Careful introspection of research methodology is replaced with the apparently profound question: “Is this going to get into Nature?”
“Sexy” results in science are what get a small snippet in the news and a big leg-up in the academic job market. If sexed-up papers had their own Robert Ludlum titles, they would be along the lines of: The Cellular Conspiracy, Bacterial Exchange: A Mystery, Cancer’s Lazarus, etc. The papers themselves are usually a little tamer, but they signal a similar intent: this is worth reading. And by “reading” they mean citations. Highly cited papers attract a much larger fan base, driving other research projects which could, in effect, be chasing a dead end. Science has quite robust error-correcting machinery, but this is of little consolation when, say, drug trials fail because previous papers identified the wrong target.
What is the solution? The paper does not give any solid answers, noting only that the problem is difficult to fix: it is not easy to change incentives. Luckily, there are still researchers out there who believe in practising rigorous and sound science. Whether their traits will survive into the next generation is an open question.