Written by: Tina Qin
Primary Source: Digital Scholarship Collaborative Sandbox
At the ACS (American Chemical Society) National Meeting in Denver last month, I attended a symposium where faculty and librarians discussed the unreliability and irreproducibility of scientific publications. The discussion offered a different angle from which to understand published research. A fundamental principle of the scientific method is that an experiment should be repeatable: another laboratory should be able to obtain the same results as those described in the published work. In reality, however, this is often not the case. How often do scientists tell the truth in their publications? Why do so many procedures prove irreproducible? Why are an increasing number of articles being retracted as a result of misconduct? These questions raise concerns about the evaluation system in science publishing. If published research turns out to be untrue, the authors, publishers, and research institutes are held responsible, through investigation, evaluation, article retraction, and other actions depending on the individual case. Scientific misconduct is one of the major causes of article retractions, and the number of articles retracted for fraud and error has risen dramatically in the last decade. Yet how many publications based on misconduct have gone undetected? Unaware, other scientists try to reproduce the fabricated experimental results until the misconduct is finally revealed.
A growing number of researchers worry that many findings in research journals, even top journals in the field, cannot be replicated. One common way to measure the quality of a journal is its impact factor, which counts how many times scientists cite its articles. Publishing in journals with high impact factors feeds grants, awards, and the popularity of the research. However, the impact factor can also be manipulated. In The Impact Factor Game, the author discussed how the impact factor is determined and concluded that our current science is evaluated by an “unscientific, subjective, and secretive” process. It follows that, because the impact factor is calculated from citations to all articles in a journal, the number “cannot tell us anything about the quality of any specific research article in that journal, nor of the quality of the work of any specific author.”
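To make the point concrete, the standard two-year impact factor is simply a ratio over a whole journal, which is why it says nothing about any single article. Here is a minimal sketch of that calculation; the journal numbers are hypothetical, not drawn from any real data:

```python
def impact_factor(citations, citable_items):
    """Two-year journal impact factor for year Y:
    citations received in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal: 2400 citations to 800 articles over two years.
print(impact_factor(2400, 800))  # 3.0
```

Because the numerator pools citations to every article, a handful of highly cited papers can raise the score for all the others, which is one reason the metric is open to manipulation.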
The complexity of systems and techniques can also lead to irreproducible results in the laboratory. One obvious example is biomedical experiments: biological noise, or random fluctuation, occurs frequently no matter how carefully the experiment is designed and executed. In the physical sciences, such as chemistry, contamination of reagents or local changes in temperature or pH can easily lead to irreproducible experimental results. Although all scientific studies carry a risk of generating irreproducible results, there are great differences between basic and clinical studies, according to Irreproducible Experimental Results. Basic science experiments offer a greater degree of control over the experimental environment but use smaller sample sizes than clinical trials, and so may be affected more strongly by biological noise. The technical side of conducting experiments is another major source of irreproducible results. Although authors strive to include the important parameters in their published procedures, subtle unrecorded details may lead to different outcomes. The laboratory environment and the proficiency of staff can be further sources of irreproducibility.
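The interaction between noise and sample size described above can be illustrated with a toy simulation. This is only a sketch with made-up numbers (a true effect of 1.0 and a noise level of 3.0), not a model of any real experiment: two small-sample "labs" measuring the same effect will typically report different estimates, while a large-sample study converges toward the true value.

```python
import random

def run_experiment(true_effect, noise_sd, n, rng):
    """Simulate one experiment: the mean of n noisy measurements
    of a fixed underlying effect."""
    measurements = [true_effect + rng.gauss(0, noise_sd) for _ in range(n)]
    return sum(measurements) / n

rng = random.Random(42)

# Two labs repeat the "same" small experiment: high noise, five replicates each.
lab_a = run_experiment(true_effect=1.0, noise_sd=3.0, n=5, rng=rng)
lab_b = run_experiment(true_effect=1.0, noise_sd=3.0, n=5, rng=rng)

# A much larger study averages the noise away and lands near the true effect.
large_study = run_experiment(true_effect=1.0, noise_sd=3.0, n=5000, rng=rng)
```

Neither small-sample result is "wrong"; the disagreement is exactly the random fluctuation the paragraph describes, and it is why a failed replication does not by itself imply misconduct.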
– Tina Qin
- Can we rely on our scientific publications? - April 10, 2015