Words like “positive”, “significant”, “negative” or “null” are common scientific jargon, but they are misleading, because all results are equally valuable in science – as long as they have been produced by sound logic and established methods. Yet literature surveys have extensively documented an excess of positive results.
Papers reporting negative results are less likely to be published, and publishing them is not encouraged within the scientific community. Several causes of this bias against negative results are known. Positive results please scientists; negative ones disappoint them. Papers reporting positive results also attract more interest from the community at large, which benefits the researchers, particularly in their careers. Confronted with negative results, scientists may therefore be tempted not to publish them. What happens to these missing negative results? They are presumed either to remain unpublished or to be somehow turned positive through selective reporting, post-hoc reinterpretation, and alteration of methods, analyses and data.
Negative results are virtually inevitable, unless every hypothesis tested were true, every experiment were designed and conducted perfectly, and statistical power were always 100% (which is very rare – it is usually much lower). There is no doubt that some negative results are produced by methodological flaws; these can be corrected, or the results simply never published. And it is likely that many scientists selectively judge or discard their negative results (perhaps the sample was too small or too heterogeneous, or some measurements were inaccurate), and in most circumstances this judgment might be nothing more than a “gut feeling”.
An unavoidable confounding factor that needs to be considered here is the quality and prestige of academic institutions, which is intrinsically linked to the productivity of their researchers. Indeed, official rankings of universities often include measures of publication rates. Separating this institutional-quality effect from the bias induced by pressures to publish is difficult, because the two factors are strictly linked: the best universities are also the most competitive, and thus presumably the ones where pressures to produce are highest.
A journal’s name recognition determines, for some audiences, the quality of the science published within its pages. Some Hopkins research labs break out the champagne when a paper is accepted in Nature, Science or Cell; others celebrate more soberly. Still, publishing a paper in a top-tier journal is always a high-five moment – for the authors and for the lab. This leads to a host of troubling questions. Has publishing in the so-called “top three” journals become not merely desirable but obligatory? Does career progress – here and elsewhere – increasingly depend not on what is published, but where? And what criteria are editors using to decide which articles appear in those journals anyway?
The current trend in science pressures researchers to produce publishable results. Analyses have concluded that researchers report positive results for their experiments far more frequently than negative ones. Because journals’ review committees tend to accept papers only if they report positive results supporting an experimental hypothesis, a bias against negative results is produced. Negative results either go unpublished or are somehow turned into positive results through selective reporting, post-hoc reinterpretation, and alteration of methods, analyses and data.
Public opinion on negative results is mixed: some say such results are interesting only if they relate to a hypothesis that is widely believed, or at least in general circulation. Yet negative results provide information on which future research can be based. Behind every study that “didn’t work” or failed to produce positive results there may be a very well planned study. A negative result is not one where nothing is found, but one where the evidence suggests that the hypothesis is wrong, and it must carry as much validity as a positive result. Making these results hard to find therefore hinders the scientific process and can lead to inefficient research.
Competition is encouraged in scientifically advanced countries because it increases the efficiency and productivity of researchers. The flip side of the coin, however, is a conflict with researchers’ objectivity and integrity, because the success of a scientific publication partly depends on its outcome. Surveys suggest that a competitive research environment decreases the likelihood of following scientific ideals and increases the likelihood of witnessing scientific misconduct. This mounting pressure can lead to scientific bias. However, no direct research has connected the pressure to publish with bias in the scientific literature, so the existence and gravity of the problem remain a matter of speculation and debate.
1. “Publish-or-perish: Peer review and the corruption of science”, David Colquhoun, guardian.co.uk
3. “Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data”, Daniele Fanelli, INNOGEN and Institute for the Study of Science, Technology and Innovation (ISSTI), The University of Edinburgh, Edinburgh, United Kingdom
Written by Shalini P. Burra for The All Results Journals.