Jul 1, 2011

Fixing Science using the internet

It seems that some out there, like the All Results Journals, are on a crusade to “fix” science. The debate has begun, and there are countless examples of it in the blogosphere.

Our first example comes from the blogs over at Discover magazine. Razib Khan, who has an excellent blog that you should bookmark, points out the problem of publication bias. It is not just rife in the social sciences, or in “any science that uses statistics”; it is rife in all forms of science, simply because no one gets rewarded for failure, or perceived failure. To this, there is a simple answer: put these perceived failures on the same level as “positive” results.

Our second way to fix science concerns the way we publish. More specifically, filtering. You can read the original manifesto here. Essentially the problem, as he puts it, comes down to the peer review system. Now, we can debate the merits of peer review all day. It is not without its faults, and he names some that are quite pertinent to the debate. But getting rid of it altogether seems a little extreme. Besides, the primary reason we publish what we publish, in the way we do, is to do exactly what he doesn't like: filter. Publication via peer review means the science you have done is of a suitable standard for the scientific community at large to critique, or to use as the basis for further research. A post-publication filter makes very little sense, especially if we're going to use social media as the delivery format.

He goes on to equate the two forms of filtering (post- and pre-publication) with the difference between free markets and Soviet central planning. Is a free-market publication system on the internet really a good idea? After all, we all know the most popular things on the internet are also the most important.

Prof Orzel over at ScienceBlogs doesn't seem to think that filtering is the problem. Wherever you put the filter, you'll still need some form of “metric” to count a scientist's worth, something we've discussed here before. However, he hit on something interesting when he referred to the “minimum publishable unit”:

“the lack of a referee-imposed threshold and the ease of publication would seem to encourage more incomplete publication, rather than less.”

This is true. Setting the bar lower for publication, as publishing on a blog would, would indeed lead to more incomplete studies. But that's not the interesting part. The interesting part is how this relates to publishing negative results. Often the reason a study doesn't make it to publication is that the results don't come out in a positive light. That is to say, the story formed at the end isn't cohesive, because you did not prove what you set out to prove. You put the work in a drawer and hope to go back to it at a later date, once you have enough data to show it in a positive light. The minimum publishable unit doesn't exist for negative results. Perhaps some see a system that allows publication of negative results as a dumping ground for loose threads and ill-conceived ideas; that is a common critique of “negative” results. But this is a misconception. The bar for the minimum publishable unit is not lowered with negative results, simply shifted.

Written by Dr. Charles Ebikeme for The All Results Journals. 
