“The definition of insanity is doing the same thing again and again expecting a different result.”
Sound familiar? It should, to anyone in the research trade. As scientists, failure is something we live with on a daily basis. And yet, at the same time, it’s something we try to hide [1]. We hardly ever embrace it. True, we chronicle it in our lab books and use it to build something that works further down the line. But our failure is treated as something we can live without. It is never placed on a par with our successes. No “veritas”, no truth, no knowledge, and no scientific story of any worth, we are told, can be built from failure. Negative results are just that: negative. This is the mindset that has been imparted to us, and the one we impart to those we train. More importantly, it is the mindset that builds careers and gets you your next job or grant. The “positiveness” of your data can only lead to a higher “impact” — and by impact I mean Impact Factor. Given the competitiveness of modern science and the grant-getting merry-go-round, who can blame us for striving to make as big a splash as possible?
Allow me to draw you a line from a time when the 2-year impact factor was designed to help librarians decide which journals to buy, to a time when these impact factors are used to decide a scientist’s worth. Much has been written about impact factors and their role in modern science [2,3]. But let’s face it: more and more, we have become accustomed to this metric. Impact factor is currency. The perceived novelty and significance of your data will be rewarded. Higher impact factors lead to more grants and better jobs. But not necessarily to better science.
The negative results we often talk about here on The All Results Journals have more substance to them: you can’t make a particular chemical compound with this method; a particular anti-malarial in a phase I clinical trial shows no effect compared to the current treatment; and so on. This is the kind of high-level negative result The All Results Journals tries to highlight. But this is somewhat against the grain. This isn’t how science is done these days. Or, at least, this isn’t how science is published. Few journals try to give special status to negativity [4,5]. So what is the solution? A revolution of sorts is needed. Introducing a “negative” bias into the science we publish can only be a good thing. Placing negative data on a par with positive data will go some way towards curbing the chase for the big splash and the high impact factor.
Science is rarely what it’s portrayed as. It is never the clean-cut, well-rounded success story in which everything works perfectly. Science is ongoing; it never stops. And science needs to be balanced. In the end, providing scientists with a portal for a more balanced kind of science will go a long way towards changing that mindset, towards getting more from the science we do, and towards freeing it up. The problem with negative results is that when they don’t fit a researcher’s preconceived notions, they are likely to be dismissed as experimental failure. This needs to change. Negative results rarely fit the current publication model. Negative results will not get you noticed. Until now.
References:
[1] http://www.nature.com/naturejobs/2010/101118/full/nj7322-467a.html
[2] The Misused Impact Factor, Science, doi:10.1126/science.1165316
[3] http://www.pnas.org/cgi/doi/10.1073/pnas.1016516107
[4] Journal of Negative Results in BioMedicine, http://www.jnrbm.com
[5] http://www.acfnewsource.org/science/negative_results.html
Written by Dr. Charles Ebikeme for The All Results Journals.