I think we all agree that scientific journals usually do not publish results unless they are statistically significant. That alone makes it important to account for publication bias when estimating the true magnitude of a particular effect.
A variety of techniques have been developed for this purpose. Some authors review publication-bias correction tools that assume selective reporting based on p-values (the probability of obtaining a test statistic at least as extreme as the one actually observed, assuming that the null hypothesis is true).
In general, the following thresholds are used:
- p < 0.01: very strong presumption against the null hypothesis
- 0.01 ≤ p < 0.05: strong presumption against the null hypothesis
- 0.05 ≤ p < 0.1: low presumption against the null hypothesis
- p ≥ 0.1: no presumption against the null hypothesis
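These conventional thresholds amount to a simple classification rule. A minimal sketch in Python (the function name and wording are mine, purely illustrative):

```python
def presumption(p):
    """Classify a p-value using the conventional thresholds listed above."""
    if p < 0.01:
        return "very strong presumption against the null hypothesis"
    elif p < 0.05:
        return "strong presumption against the null hypothesis"
    elif p < 0.1:
        return "low presumption against the null hypothesis"
    else:
        return "no presumption against the null hypothesis"

print(presumption(0.03))  # strong presumption against the null hypothesis
```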
In practice, scientists tend to publish significant effects (p < 0.05) and consign the rest to the file drawer.
A new paper based on the p-curve has recently been published. The authors reason that if the true effect of something is X and you run a number of studies, statistical chance means you will obtain a range of results arrayed along a curve centered on X.
According to the published
study, when a studied effect is nonexistent, the p-curve is uniform, by definition. Moreover, for any given sample
size, the bigger the effect, the more right-skewed the expected p-curve
becomes. The authors say the p-curve is more
precise when it is based on studies with more observations and when it is based
on more studies. Less obvious, perhaps, is that larger true effects also lead
to more precision. This occurs because the p-curve’s
expected shape becomes very right-skewed very fast as effect size increases,
reducing the variance in skew of the observed p-curves. The skewness of the p-curve
and the statistical power of a test are very closely related. Both are a
function only of the effect size and sample size.
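The two claims above — a flat p-curve under the null, a right-skewed one under a true effect — can be checked with a small simulation. The sketch below, using a simplified two-sided z-test on samples of my own choosing (all parameter values here are illustrative assumptions, not from the paper), keeps only the "published" p-values below .05 and measures right-skew as the share of them falling below .025:

```python
import math
import random

def pvalue(sample):
    """Two-sided p-value of a z-test for mean 0, known sd 1 (illustrative)."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def significant_pvalues(effect, n=20, studies=5000, seed=1):
    """Simulate many studies; keep only the 'published' p-values (p < .05)."""
    rng = random.Random(seed)
    ps = []
    for _ in range(studies):
        sample = [rng.gauss(effect, 1) for _ in range(n)]
        p = pvalue(sample)
        if p < 0.05:
            ps.append(p)
    return ps

def share_below(ps, cut=0.025):
    """Right-skew indicator: share of significant p-values below .025."""
    return sum(p < cut for p in ps) / len(ps)

# Under the null, significant p-values are uniform on (0, .05), so roughly
# half fall below .025; under a true effect, well over half do.
print(share_below(significant_pvalues(0.0)))   # close to 0.5
print(share_below(significant_pvalues(0.5)))   # substantially above 0.5
```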
Because p-curves have a known shape, examining even a small section of one lets you estimate the shape of the full curve. This in turn lets you estimate the true effect size just as if you had read all the studies, not only the ones that were published.
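To make the inversion idea concrete, here is a deliberately simplified sketch (my own construction, not the paper's actual estimator, which uses the full p-curve rather than one summary statistic). Under a two-sided z-test with standardized true effect delta, the expected share of significant p-values falling below .025 is a known, increasing function of delta, so an observed share can be inverted by bisection to recover delta:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def expected_share_below_025(delta):
    """Expected fraction of significant (p < .05, two-sided z-test) p-values
    that fall below .025, for standardized true effect delta = d * sqrt(n).
    The negligible wrong-sign tail is ignored (fine for delta >= 0)."""
    return Phi(delta - 2.2414) / Phi(delta - 1.9600)

def estimate_delta(observed_share, lo=0.0, hi=10.0):
    """Invert the monotone curve by bisection: find the delta whose expected
    p-curve skew matches the skew observed among published results."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_share_below_025(mid) < observed_share:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, if about 82% of published significant p-values fall below .025, `estimate_delta(0.82)` recovers a standardized effect of roughly 2.2, even though no null or marginal results were ever seen.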
Although no technique yet exists that can eliminate this bias entirely, the authors have shown that the distribution of significant p-values can be analyzed to remove the impact of selective reporting on effect-size estimation.
Original source
Written by Dra. Belén Suárez Jiménez for The All Results Journals