SACSIS has been talking about publication bias for a long time, but some readers today may not really know what the term means. An interesting article published recently may answer some questions on this topic.
Publication bias occurs when the publication of studies depends on the nature and direction of their results, so that published studies may be systematically different from unpublished ones.
Published literature is the main source of evidence for making clinical and health-policy decisions. The number of published studies has increased dramatically over time, but it has been reported that about 50% of completed studies may still remain unpublished.
Since the first identified article using the term “publication bias” in 1979, the number of references potentially relevant to publication bias has increased considerably (Figure 1), and this growth may reflect increased awareness of publication and related biases.
Bias may be introduced intentionally or unintentionally, consciously or unconsciously, into the process of research dissemination. The dissemination profile of research may be influenced by investigators, study sponsors, peer reviewers, and journal editors. According to surveys of investigators reviewed by Song et al., the main reasons for nonpublication of completed studies included lack of time or low priority (34.5%), unimportant results (19.6%), and journal rejection (10.2%). In other words, nonpublication was usually due to investigators failing to write up and submit their work when the results were considered negative or nonsignificant.
Publication bias will result in misleading estimates of treatment effects and associations between study variables. Here the consequences of publication bias are considered separately for basic biomedical research, observational studies, and clinical trials.
Results of basic medical research are often used to support subsequent clinical trials. If the results of basic research are falsely positive due to biased selection for publication, subsequent clinical trials may waste limited resources and fail to confirm the published results of basic studies.
In clinical trials, publication bias also has a direct impact on the health of patients and populations: when the relative efficacy of a treatment is overestimated because of publication bias, health resources can be wasted on more expensive interventions, instead of cheaper alternatives, without a corresponding improvement in outcomes.
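The mechanism behind these inflated estimates can be illustrated with a small simulation. The sketch below (a hypothetical example, not taken from the article: the effect size, sample sizes, and the simple z-test are all assumptions chosen for illustration) simulates many identical trials of a treatment with a modest true effect, then "publishes" only the statistically significant ones. The average published effect comes out noticeably larger than the true average across all trials.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # modest true treatment effect (standardized units, assumed)
N_STUDIES = 2000    # number of simulated trials
N_PER_ARM = 30      # participants per arm in each trial

def simulate_study():
    """Run one two-arm trial; return (effect estimate, reached significance?)."""
    treat = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    ctrl = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treat) - statistics.mean(ctrl)
    # Crude z-test with the (known) SD of 1, so SE = sqrt(2 / n)
    se = (2 / N_PER_ARM) ** 0.5
    significant = abs(diff / se) > 1.96
    return diff, significant

studies = [simulate_study() for _ in range(N_STUDIES)]
all_mean = statistics.mean(d for d, _ in studies)

# Publication bias: only "positive" (significant) trials reach the literature
published = [d for d, sig in studies if sig]
pub_mean = statistics.mean(published)

print(f"mean effect, all trials:      {all_mean:.2f}")
print(f"mean effect, published only:  {pub_mean:.2f}")
```

Because small trials only reach significance when chance pushes their estimate well above the true effect, averaging the published subset systematically overstates the treatment's benefit, which is exactly why a meta-analysis restricted to published trials can mislead.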
Methods of avoiding publication bias, by identifying and including unpublished outcomes and unpublished studies, are discussed and evaluated. These include searching without limiting by outcome, searching prospective trials registers, searching informal sources, including meeting abstracts and PhD theses, searching regulatory body websites, contacting authors of included studies, and contacting pharmaceutical or medical device companies for further studies.
The choice of strategy to reduce the risk of publication bias depends on whether the aim is to tackle entire sets of missing studies, or whether selective/incomplete reporting of data by authors is considered to be the primary problem.
Trial registration, a process by which details about the design and conduct of a clinical trial are made publicly available, is considered to have both scientific and ethical implications, particularly in light of item 19 of the Declaration of Helsinki, which states: “Every clinical trial must be registered in a publicly accessible database before recruitment of the first subject”.
Publication bias could also be reduced if journal editors moved away from the policy of giving greater priority to articles subjectively perceived as more novel, more important, or statistically significant. In this sense, some open access journals, such as The All Results Journals, encourage authors to submit reports of negative or unexciting results.
That is the way!
Written by Dr. Belén Suarez for The All Results Journals