Mar 4, 2011

The things that go unpublished

Treatment trials, prevention trials, diagnostic trials, expanded access trials, screening trials and quality of life trials. Safe dosage range, identification of side effects, comparison to commonly used treatments, post-marketing studies, and of course the placebo comparison. Every new drug that hopes to reach the market has to undergo it all. Eventually, we should come to know what every new drug can and cannot do. Ideally, only the effective drugs -- the ones that do exactly what they say on the label and nothing more -- make it to the end. In this process, a wealth of information is gained. Unfortunately, for the drugs that don't make it all the way to the end, a lot of vital information is also lost.

A recent study entitled "Time to publication for results of clinical trials" has highlighted the need to publish the bad as well as the good from clinical trials, an effort supported by the World Health Organisation. Clinical trials showing a positive treatment effect, or those with important or striking findings, were much more likely to be published in scientific journals than those with negative findings. Researchers at the Cochrane Centre set out to investigate whether publication of clinical trials was influenced by the result obtained (positive or negative), and to what extent the significance of the outcome influences publication. The more striking results are published sooner, and are also more likely to be published at all. Positive results in favour of the treatment hypothesis were published within about 5 years, while the so-called "silent evidence" went unpublished for far longer: trials with null or negative results only appeared after about 6 to 8 years, possibly because additional time is needed for follow-up trials and studies. This, of course, says something about the way we prize positive findings in our data: often, follow-up studies are done to glean something positive from an otherwise negative result.

In the end, the wealth of information on a particular drug or treatment -- the systematic review -- will be incomplete, adversely affecting the evidence available for decision and policy making. The importance of this problem has been recognised for some time. As such, the biases in the science we do must be brought into the open, to create as clear a picture as possible.
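
To see why this matters in practice, here is a minimal sketch (an illustration of the general point, not an analysis from the study above). Assume, purely for the sake of argument, a treatment with no real effect, a thousand hypothetical trials, and a literature that only prints the strikingly positive ones; every number below is an invented assumption.

```python
# Purely illustrative simulation: all numbers are invented for this sketch.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.0   # assume the treatment truly has no effect
N_TRIALS = 1000     # hypothetical number of independent trials
NOISE_SD = 1.0      # sampling noise in each trial's estimated effect

# Each trial reports the true effect plus random noise.
estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_TRIALS)]

# A biased literature: only trials with a strikingly positive estimate
# make it into print; the "silent evidence" never appears.
published = [e for e in estimates if e > 0.5]

print(f"true effect:              {TRUE_EFFECT:+.2f}")
print(f"mean over all trials:     {statistics.mean(estimates):+.2f}")
print(f"mean over published only: {statistics.mean(published):+.2f}")
```

With these made-up numbers, the full set of trials averages out to roughly zero, while the published subset suggests a sizeable benefit -- exactly the distortion inherited by a reviewer who can only read what made it into print.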


Written by Dr. Charles Ebikeme for The All Results Journals.


NOTE FROM DAVID ALCANTARA: Please leave your best ideas about publication bias, and how a journal of negative results can help fight it, in the comments below. Please try to contribute something of value to this conversation -- an insight, a practice, or a resource that we can all use. Thank you!

4 comments:

  1. Negative results are not flashy, but they are very useful. Unfortunately, publications are used to promote the reputation of the author and the publisher, not just the content. It is like advertising. If you make an iPhone, everyone knows your name. If you sell them, everyone lines up outside your store; but who makes screws? What store would you drive across town to visit, and then stand in line at, only for the privilege of pre-ordering screws? You can make a lot of money making and selling screws, but you won't get famous.

    Negative results are also more reliant on external information and context, so they are harder to present than positive ones. A positive result comes with a built-in context: "Any circumstance under which such positive results could be observed or replicated". A negative result needs to have its context explicitly defined, like: "In the following circumstance, no signal above background could be detected". This makes them less self-contained and harder to interpret correctly.

    One way to provide context is through the use of "complete" positive controls, such that the hypothesis is "Whenever the following conditions are observed, no evidence for X is obtained." Another way is to place the result in the context of a supporting review or survey paper that describes the landscape of competing models or methodologies: "X model (or X methodology) as proposed does not appear to be a good choice because of the following null results..."

    There is always some reason a researcher believes a positive result is possible/probable, or believes a negative result is important/useful. When stated with appropriate context, null results seem little different from positive ones. I can't imagine why null results presented in context are considered less valuable. I enjoy my iPhone, but I *need* screws.

  2. Thanks Stuart,
    you made a good point about reputation. It is as if the "normal" results (the positive ones, the ones at present being published) build a reputation up and the negative ones don't. For me, this should not be the case. What happens if a researcher cannot publish anything in a year (because, unfortunately, they didn't get positive results, the ones loved by journals)? Does that mean the researcher has not been working at all? Sadly, this is what major funding agencies may think, and getting grants could become much more difficult for him/her. This is a misrepresentation problem. The real work is not being published.
    I totally agree with you that you have to provide context when publishing, for both negative and positive results; though I don't see why it would be harder to contextualize negative results.
    And yes, everybody will benefit from having good "screws" (a.k.a. negative-result papers) for building their home (research) :)

  3. Hi David,
    I agree with Stuart on this. While I like your thinking on this - and actually agree that it makes sense to find a way to publish negative results - I feel it is much harder to do. Did you get a negative result because you did something wrong, or because it is truly negative? Did you simply use an antibody that is not cross-reactive with your protein, or is that protein really not there? I think that for such a publication, full documentation would be even more important than for positive results.

    That being said - it is hardly impossible. Good controls, etc., go a long way to proving that the experiment was done properly. How many millions of dollars have been wasted by people all repeating the same thing because no one ever published a paper saying not to bother? There is definitely a need for this sort of information.

  4. Thomais Kakouli-Duarte, May 14, 2011 at 3:06 AM

    I landed on this exchange of thoughts accidentally, folks; thank you all for bringing up a very common occurrence in research, and for valuable and useful insights. I also noticed David's thinking about an actual journal of negative results. That would be radical and useful indeed.
