A summary of this article appears in Null hypothesis.
Publication bias is a well-documented problem in a range of disciplines. See, for example, V. M. Montori, M. Smieja and G. H. Guyatt, "Publication Bias: A Brief Review for Clinicians," Mayo Clinic Proceedings 75 (2000): 1284-8; A. Thornton and P. Lee, "Publication Bias in Meta-Analysis: Its Causes and Consequences," Journal of Clinical Epidemiology 53 (2000): 207-16.
- 1 Merger proposal
- 2 Publication Bias – is it always a positive outcome?
- 3 there are other kinds of publication bias
- 4 the scale of the publication bias
- 5 Effectiveness of pre-registration
- 6 External links
- 7 Is the person who discovered publication bias worth a mention?
- 8 WEIGHT - size of the problem obfuscated.
I suggest that File drawer problem should be merged with this article. The two terms are so closely related and the two articles are covering such similar ground that it doesn't seem useful to have separate articles. Publication bias is the more general term and this article is more developed so it makes sense to merge File drawer problem into here rather than vice-versa. --Qwfp (talk) 10:57, 21 February 2008 (UTC)
Publication Bias – is it always a positive outcome?
Ioannidis ("Why Most Published Research Findings Are False") states "Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias". In some cases, wouldn't the prevailing bias be for a negative result? If the prevailing bias is that Wonderbread-and-bologna sandwiches don't improve serum glucose in diabetics, won't it be difficult to get a paper published that claims a positive result? --SV Resolution(Talk) 13:54, 3 June 2008 (UTC)
The article File drawer problem states
"Publication bias" is a more general term, as it may include differences in the availability or accessibility of published papers due to the language, format or journal of publication.
It may not always be positive. I was listening to a radio programme which was discussing this issue. It seems that academic journals favour results that diverge from the norm. For example, a new theory or unexpected finding is published because it is unexpected or unusual. For a period of time, the prevailing bias amongst journals favours studies which support this case. However, as the theory becomes (seemingly) well grounded, there is a prevailing tendency towards the publication of research which negates the theory. In light of this, I call for a change in the definition expressed in the opening paragraph of this article.--126.96.36.199 (talk) 07:37, 26 May 2011 (UTC)
there are other kinds of publication bias
- I went ahead and merged it; there really wasn't enough to stand alone once the OR was removed. Fences and windows (talk) 00:20, 29 March 2009 (UTC)
And indeed there is a "reverse publication bias" in financial economics. Timmerman and Granger (2004) argue (in "Efficient market hypothesis and forecasting", footnote 1) that:
"In studies of market efficiency, a reverse file drawer bias may be present. A researcher who genuinely believes he or she has identified a method for predicting the market has little incentive to publish the method in an academic journal and would presumably be tempted to sell it to an investment bank."
This also provides support for keeping the more general term of "publication" bias, unless we are prepared to live with the implications (of academic journals in financial economics acting as the "file drawer" for the unsold stuff).
I managed also to trace the original "file drawer" term to Rosenthal (1979) (Rosenthal, Robert, 1979. The file drawer problem and tolerance for null results. Psychological Bulletin 86(3), 638-641.) — Preceding unsigned comment added by 126.96.36.199 (talk) 12:09, 11 December 2011 (UTC)
the scale of the publication bias
Is the "1 in 3" number attributed to Dickersin (1987) correct or sufficient? The abstract of that paper states that the ratio of published-to-discarded rejections of the null (hypothesis of treatment ineffectiveness) is 55% to 14%. Surely it is equally salient news that the proportion of significant to insignificant results is four times higher in published studies than in those which had been consigned to the file drawer (or rejected by peer review)? 188.8.131.52 (talk) 16:25, 11 December 2011 (UTC)
There were actually 3 other studies that found no difference b/w rates of publication but they never got published ;) — Preceding unsigned comment added by 184.108.40.206 (talk) 16:39, 1 November 2012 (UTC)
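For what it's worth, the "four times" figure above is just the ratio of the two proportions quoted from the Dickersin (1987) abstract (a quick sketch; the 55% and 14% values come from the abstract as quoted, the variable names are mine):

```python
# Proportions of studies rejecting the null hypothesis, as quoted above:
published_significant = 0.55    # among published studies
filedrawer_significant = 0.14   # among unpublished / file-drawer studies

ratio = published_significant / filedrawer_significant
print(f"significant results are {ratio:.1f}x as common among published studies")
# 0.55 / 0.14 is roughly 3.9, i.e. about four times higher
```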
Effectiveness of pre-registration
I'm an outsider to pharmaceutical research, but I'd have thought that “[With pre-registration,] negative results should no longer be able to disappear” is too strong a claim: one can still conduct studies that are neither published nor registered with such publications, and thus have any negative results disappear. Worse, publication bias can persist even within such publications, because one can choose to register a study only once one is reasonably sure that the outcome will be positive under carefully contrived circumstances, while presenting the results as being general.
That is not to say that a pre-registration policy will have no effect on publication bias, merely that “[not] able to disappear” seems too strong a claim, and that better wording would be something like “in an attempt to reduce publication bias”. Pjrm (talk) 04:45, 23 February 2009 (UTC)
I think the way of putting it that best keeps to the NPOV is "will still be recorded". It assumes nothing not already declared in the article (the existence of publication bias and the file drawer effect). —Preceding unsigned comment added by 220.127.116.11 (talk) 22:31, 28 July 2009 (UTC)
Sorry, I can't remember where, but I saw something about the pre-registration guidelines not actually being followed. So you still get publication bias, but worse, believe you aren't. Perhaps someone can hunt down more info on this. Could have been Ben Goldacre. 18.104.22.168 (talk) 11:44, 10 February 2010 (UTC)
External links
I tried to delete the last item in the external links section, because it was a duplicate of the first item in the section, but someone added it back in. What is the problem? It is a duplicate.
"The Truth Wears Off: Is there something wrong with the scientific method? -- Jonah Lehrer" (the first item) goes to the same link as "interesting article on 'the decline effect' and the role of publication bias in that" (the last item).
I also added in a good link to a very good article and it was deleted. Why?
Is the person who discovered publication bias worth a mention?
User:U3964057 is trying very hard to keep my well-sourced sentence "Publication bias was discovered by statistician Theodore Sterling in 1959" out of the article. I wonder why. --Arno Matthias (talk) 09:48, 9 July 2015 (UTC)
- Hi Arno Matthias. Thanks for coming to the talk page. As should be clear from my edit summary, my concern is not that the content is inappropriate for inclusion in the article, but instead that it is undue weight for the lead. My suggestion would be to kick off the 'evidence of publication bias' section with a sentence along those lines. What do you, or others, think? Cheers Andrew (talk) 01:59, 10 July 2015 (UTC)
- Agreed; the lede should summarize what's in the body of the article. Fgnievinski (talk) 03:22, 10 July 2015 (UTC)
- If you wanted the sentence in, you could have moved it to a more appropriate place. Instead you deleted it. This leaves but one conclusion... I will now put it in a third time and see what happens. --Arno Matthias (talk) 08:11, 10 July 2015 (UTC)
WEIGHT - size of the problem obfuscated.
For no good reason, the article gives no clear picture of the size of the problem. For example:
0. Reference kicinski2015_19-2 states: "In the meta-analyses of efficacy, outcomes favoring treatment had on average a 27% higher probability to be included than other outcomes. In the meta-analyses of safety, results showing no evidence of adverse effects were on average 78% more likely to be included than results demonstrating that adverse effects existed. In general, the amount of over-representation of findings favorable to treatment was larger in meta-analyses including older studies." This reference is, oddly, used to support this article content: "The study showed that positive statistically significant findings are more likely to be included in meta-analyses of efficacy than other findings and that results showing no evidence of adverse effects have a greater probability to enter meta-analyses of safety than statistically significant results showing that adverse effects exist." The phrases I underlined ("more likely", "a greater probability") are vague and inferior to the use of numbers from the source. Can we fix this?
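To make the quoted figures concrete, here is a quick sketch of what 27% and 78% relative increases in inclusion probability mean (only the ratios come from the quoted abstract; the baseline probability `p` is hypothetical, chosen purely for illustration):

```python
# Hypothetical baseline probability that a given outcome enters a meta-analysis.
p = 0.50

# Relative increases quoted from the Kicinski et al. (2015) abstract above.
favorable_efficacy = p * 1.27   # outcomes favoring treatment: 27% higher
no_adverse_safety = p * 1.78    # "no adverse effects" results: 78% more likely

print(f"favorable efficacy outcome:  {favorable_efficacy:.3f}")
print(f"no-adverse-effects result:   {no_adverse_safety:.3f}")
```

Whatever the true baseline, the safety figure implies nearly a doubling of inclusion probability, which is exactly the kind of concrete statement the article text currently waters down.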
Also, the use of that reference to support this article content: "A recent study showed that publication bias is smaller in meta-analyses of more recent studies, supporting the effectiveness of the measures used to reduce publication bias in clinical trials." feels too close to the source.
- There's a copyvio issue - that article content contains these phrases which appear to be copied verbatim from the source: "publication bias is smaller in meta-analyses of more recent studies" and "supporting the effectiveness of the measures used to reduce publication bias in clinical trials".
- We should state how much smaller the bias was found to be in recent studies!