Nov 25, 2010

The Problem of Positive-Outcome Bias

One reason that so-called negative studies often fail to be published may be "positive-outcome bias" in peer review, said researchers who conducted a randomized trial.

Presented with a fictitious study showing that one treatment was superior to another, peer reviewers at two orthopedics journals were significantly more likely to recommend publication than when given an otherwise identical manuscript that indicated no difference between treatments, according to Seth Leopold, MD, and colleagues at the University of Washington in Seattle.

Moreover, they reported in the Nov. 22 issue of Archives of Internal Medicine, it appeared that reviewers gave "heightened scrutiny" to the no-difference study. Methods in the two versions of the manuscript were identical, but reviewers gave lower grades to the methodological quality in the one indicating no difference between treatments.

Leopold and colleagues concluded that their findings constitute "evidence of positive-outcome bias" in manuscript review and therefore in publication.

"To the extent that positive-outcome bias exists, it would be expected to compromise the integrity of the literature in many important ways, including, but not limited to, inflation of apparent treatment effect sizes when the published literature is subjected to meta-analysis," they wrote.
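The inflation mechanism the authors describe can be illustrated with a small simulation (all numbers below are hypothetical, not taken from the study): if every significant positive result is published but only a fraction of null results are, the mean published effect overshoots the true effect.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # hypothetical true standardized treatment effect
STUDY_SE = 0.15     # hypothetical standard error of each study's estimate
N_STUDIES = 2000

# Simulate per-study effect estimates scattered around the true effect.
estimates = [random.gauss(TRUE_EFFECT, STUDY_SE) for _ in range(N_STUDIES)]

published = []
for est in estimates:
    significant_positive = est / STUDY_SE > 1.96  # nominal p < 0.05, positive direction
    # All significant positive studies get "published";
    # only 20% of the remaining studies do (positive-outcome bias).
    if significant_positive or random.random() < 0.2:
        published.append(est)

print(f"mean of all studies:       {statistics.mean(estimates):.3f}")
print(f"mean of published studies: {statistics.mean(published):.3f}")
```

A naive meta-analysis averaging only the published estimates would therefore overstate the treatment effect, exactly the distortion the authors warn about.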

In the study, the researchers randomly sent one of the two nearly identical manuscripts to 102 reviewers for the American edition of the Journal of Bone and Joint Surgery (JBJS) and 108 reviewers for Clinical Orthopaedics and Related Research (CORR). The editors of these journals when the study was conducted were investigators in the study.

Reviewers for these journals were informed beforehand that, as part of a study of peer review, they might receive a manuscript for review and they could opt out if they wished. Those not opting out were not told which manuscript was part of the study nor were they informed about the study's aims.

The manuscripts themselves reported a randomized trial of surgical antibiotic prophylaxis in which all details were identical except for the primary outcome result and the corresponding conclusions. One version indicated that a particular antibiotic regimen was significantly superior to another, whereas the other reported no difference.

Also, five errors were intentionally included in the manuscripts: two mathematical mistakes, two erroneous reference citations, and one instance of switched results in a data table.

Leopold and colleagues wrote that the manuscripts "represent[ed] an extremely well-designed, multicenter, surgical, randomized controlled trial." They also noted that the experiment took place before public trial registration became mandatory at these journals, so the fabricated study's absence from a trial registry would not have raised suspicions.

The positive version of the manuscript drew recommendations to publish from 97.3% of the reviewers at both journals, with little difference between them.

Only 80.0% of reviewers recommended publishing the no-difference version (P<0.001). The disparity was most pronounced at the JBJS, where 71.2% of reviewers favored publication (versus 98.0% for the positive manuscript, P=0.001).

At CORR, 89.6% of reviewers indicated the no-difference manuscript should be accepted (versus 96.7% favorable toward the positive version, P=0.28).
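The kind of comparison behind these P values can be sketched with a two-proportion z-test. The counts below are hypothetical, chosen only to approximate the reported 97.3% versus 80.0% acceptance rates; the paper's exact denominators and choice of test may differ.

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts approximating the reported acceptance rates.
z, p = two_proportion_z(108, 111, 84, 105)  # ~97.3% vs ~80.0%
print(f"z = {z:.2f}, p = {p:.5f}")
```

With gaps this large, the test comfortably reaches the P<0.001 threshold the article reports for the pooled comparison.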

At both journals, reviewers were significantly more likely to detect the deliberately placed errors in the no-difference version of the manuscript, finding an average of 0.85 of the five errors versus 0.41 in the positive-outcome version (P<0.001). CORR reviewers were slightly more adept at picking up errors in both versions relative to their JBJS counterparts, but the gap in detection rates between the two versions was the same at both journals.

The review process at both journals also involved scoring the quality of methods. JBJS reviewers downgraded the methods in the no-difference manuscript significantly (mean 7.66 versus 8.68, P=0.005).

At CORR, there was a nonsignificant trend toward lower methods scores for the no-difference version (mean 7.38 versus 7.87, P=0.22).

But the overall mean scores at both journals combined remained significantly higher for the manuscript with the positive outcome.

Leopold and colleagues noted that other forces besides reviewers' bias may favor publication of positive-outcome studies, including an increased likelihood that such studies will be submitted in the first place.

"If so, then that, along with the evidence identified in this experimental study, highlights the importance of sensitivity to this issue during peer review," they argued.

They suggested that journals should do more to foster publication of high-quality studies with negative results -- encouraging authors to submit them and giving them higher priority for publication, perhaps in nontraditional forms such as online appendices.

Limitations to the study included differences in the review processes at the two journals and the possibility that some reviewers detected the fabrication.
