Written by: Chris Waters
Primary Source: Watershedding
A recent article from The Economist, “Trouble at the lab,” published October 19, 2013, claims that the scientific process is failing due to a number of factors: the drive to publish in high quantities, the lack of publication of negative data, improper statistical analyses, and poor peer review. This failing, The Economist claims, is leading to large amounts of irreproducible findings becoming part of the scientific canon. Although I doubt you will find any scientist who claims that errors or mistaken conclusions are never published (and any scientist who did would be dead wrong), I am sure that most of us in the field strongly disagree with the conclusions of this article. The article relies on three main arguments:
1. Scientific findings that are currently being questioned
The article starts with the concept of priming, a recent tenet of psychology (not my area) proposed a decade ago that is now coming under question due to irreproducibility. In addition, two recent studies by Amgen and Bayer HealthCare are described that found only a low percentage of “seminal” studies to be reproducible. The article also cites a 2010 Science paper that proposed a genetic basis for longevity and was retracted a year later because “Other geneticists immediately noticed that the samples…had been treated in a different way…” as an example of incorrect science being published. But aren’t all of these points evidence that the field is working and self-correcting? The public needs to realize that science is complicated and takes time. Self-correction will never be immediate, but rather a process playing out over many years.
2. Journals and peer reviewers are not doing their job
As evidence that the peer review process is failing, the article describes a recent study by John Bohannon of Harvard, who submitted an intentionally flawed paper to 304 open-access journals; 157 of them accepted the paper, often without significant peer review. This particular study targeted many “predatory” journals. These journals are not interested in scientific quality; they merely try to lure authors into quick publications to collect hefty publication fees. It is quite misleading for the authors of “Trouble at the lab” to imply that these journals are at all representative of the peer review process in established, respected journals. Moreover, the Bohannon study is itself evidence that the scientific community is self-policing. In fact, the Bohannon study found that PLOS ONE, the largest open-access journal in the world, rejected the flawed paper and provided some of the most detailed and accurate reviews. Yet the authors of “Trouble at the lab” cite PLOS ONE’s 50% rejection rate as further evidence of a flawed peer review system. It is in fact quite the opposite! If 50% of all papers submitted to PLOS ONE, which is considered an “easier” journal to publish in, are rejected, imagine how difficult it is to publish in more established journals. I speak from experience, believe me.
3. A statistical argument that statistics are flawed (irony?)
Disclosure: I am not a statistician, but I’ll try to present the article’s main argument. The Economist argues that at the standard significance threshold of 5% (meaning the observed result would occur by chance only 1 time in 20), many published conclusions are incorrect. This is based on their estimate that only 10% of tested hypotheses are true (the basis for this estimate is not at all clear). With a statistical power of 0.8 (i.e. 2 of every 10 true hypotheses are missed), one finds 8% of all hypotheses correctly confirmed and 4.5% falsely confirmed (the 90% of wrong hypotheses multiplied by the 5% false-positive rate). In other words, roughly a third of the “significant” findings (4.5 out of 12.5) would be incorrect.
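The arithmetic behind this claim is easy to check. The sketch below simply plugs in The Economist's assumed inputs (the 10% true-hypothesis rate and the 0.8 power are the article's assumptions, not established figures):

```python
# Check The Economist's false-positive arithmetic using the
# article's own assumed inputs (these numbers are assumptions,
# not measured values).
true_rate = 0.10   # assumed fraction of tested hypotheses that are true
alpha = 0.05       # significance threshold (false-positive rate)
power = 0.80       # chance a true effect is detected

confirmed_true = true_rate * power           # 0.08 of all hypotheses
confirmed_false = (1 - true_rate) * alpha    # 0.045 of all hypotheses

# Fraction of "significant" results that are actually wrong:
fraction_wrong = confirmed_false / (confirmed_true + confirmed_false)
print(round(fraction_wrong, 2))  # 0.36, roughly a third
```

Note that the conclusion is entirely driven by the assumed 10% true-hypothesis rate; a more generous assumption shrinks the fraction considerably.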
The major flaw in this argument is the assumption that a hypothesis is deemed “correct” based on a single result at the 5% significance level. Wouldn’t it be nice if I could publish a paper with only one figure: THE experiment showing my model is correct! Rather, in good publications, multiple experiments test one hypothesis from many different angles. In fact, a paper from my lab that was accepted last week in Molecular Microbiology has 14 data figures! Therefore, if an average paper uses three independent pieces of evidence to support one hypothesis, each significant at the 5% level, then the probability that all of those observations occurred by chance would be 0.05³, or 0.000125. Even then, good scientists do not state that their hypothesis is “correct,” but rather that it is “supported” by the current data. On top of that, ideas are not fully accepted into the scientific lexicon until they have been repeated by others in different settings, further raising the rigorous bar science must clear before a hypothesis becomes widely accepted. The cream rises to the top and the rest falls by the wayside.
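The difference between one supporting result and three is easy to see numerically. A minimal sketch (assuming, for illustration, that the three experiments are statistically independent):

```python
# Probability that N independent experiments, each significant at
# p = 0.05, are all simultaneous flukes: 0.05 ** N, not 0.05 * N.
p_single = 0.05
n_experiments = 3
p_all_chance = p_single ** n_experiments
print(round(p_all_chance, 6))  # 0.000125, i.e. about 1 in 8000
```

Real experiments in a paper are rarely fully independent, so this is a best-case bound, but the qualitative point stands: converging lines of evidence multiply, and the chance that all of them are flukes collapses quickly.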
Bottom line: This article should be rejected
I am concerned that this misleading piece by The Economist will be misinterpreted and used as ammunition in the “War on Science” that we see play out in public forums. Much of the evidence provided in the article actually supports that the scientific process is working. No one denies that papers are published with incorrect conclusions. But this is because we are studying complex processes in greater and greater detail. As much as we might wish it, science does not proceed as a linear set of experiments from point A to point B, but rather as a tangled web of inquiry from point A to point Z, sometimes leaping forward, sometimes retreating, but ultimately increasing our understanding of the world around us. From my vantage point, “the lab” is doing just fine and will continue to drive discovery and innovation, providing solutions to society’s biggest problems.