Wikipedia talk:Identifying reliable sources (medicine)/Archive 14


What is a low impact factor?

JzG has made some comments at Talk:Acupuncture#Suitability of content for this article regarding whether certain sources are reliable, and one issue he has discussed is the impact factor of the journals in which they are published. He asserts that the Journal of Affective Disorders and several other journals have low impact factors. However I am not sure if they are really so low that we should not be including review articles published in them as sources. What do other editors think? Everymorning talk 00:10, 15 May 2015 (UTC)

We had a similar conversation here about this awhile back. [1] Essentially, an impact factor of zero and very near it is our main red flag of a shoddy journal where we wouldn't use it. Once we start getting to the 0.5 region and above, it's highly field dependent what's considered reputable, and we aren't really in a position as editors to reject a source due to impact factor alone. At that point, you need to consider whether the idea being proposed is fringe and reject it from a weight perspective. Another way you could tackle things (though I rarely see this used here) is to look at the impact factors of all the journals being cited for reviews. If you're dealing with 10s for most journals, but a review with a unique or potentially fringe idea is at a 1, there's a good chance it's an extremely minority viewpoint there if there's even one worth mentioning at all.
Is there maybe a single source being discussed that could be used as an example in this discussion? Once you get over zero, we really can't be going by impact factor alone, but it could very well play a role in determining weight. In medical fields though, my sense of the topic is that 1's and 2's are considered low. Kingofaces43 (talk) 00:51, 15 May 2015 (UTC)
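A minimal sketch of the screening heuristic described above, assuming a hand-collected list of impact factors for the journals cited on a topic (every name and number below is a hypothetical placeholder):

```python
# Sketch only: screen cited journals for impact-factor outliers.
from statistics import median

cited_journals = {
    "Journal A": 12.4,   # hypothetical impact factors
    "Journal B": 9.8,
    "Journal C": 11.1,
    "Journal D": 1.2,    # the journal carrying the potentially fringe review
}

def flag_outliers(journals, near_zero=0.1, ratio=5.0):
    """Flag journals with an effectively zero impact factor, or one far below
    the median of the other journals cited on the same topic. A flag is only
    a prompt for a weight discussion, never a reliability verdict by itself."""
    med = median(journals.values())
    flags = {}
    for name, jif in journals.items():
        if jif <= near_zero:
            flags[name] = "near-zero impact factor: main red flag"
        elif med / jif >= ratio:
            flags[name] = f"well below the median of {med:.1f}: check weight"
    return flags

print(flag_outliers(cited_journals))
```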
I would support this notion. Cut off the absolute low ratings, and then take it on a case-by-case basis, with the impact factor having some bearing on the overall judgment.
The Seneff paper on glyphosate was published in the journal Entropy, which has an impact factor of 1.564 (see Entropy (journal)).
That rating is not stellar, but it's acceptable. It can be considered.
I would not use the Seneff paper to support any claims, however, because it's a loose and wild ride in possibilities without careful research into whether any particular claim is likely or even truly possible. It uses sources loosely, and it speculates wildly.
There may be valid claims in the paper, but they are not developed with a degree of certainty that warrants serious reporting, yet. It is a good paper, though. SageRad (talk) 01:26, 15 May 2015 (UTC)
Low impact factor depends on the subject--there is no general rule. It is generally more useful to look at the impact of individual papers, which is what impact factor is designed to measure. DGG ( talk ) 20:39, 15 May 2015 (UTC)
DGG, is there an easy way to do that? I think some of our editors are using impact factors for journals because it's an easy way to provide an "objective" objection to a journal. WhatamIdoing (talk) 07:26, 18 May 2015 (UTC)
For a particular instance of a challenged reference, certainly there is an easy way to do that--for the paper whose degree of reliability is in question, one looks at the citations in Google Scholar (or ISI or Scopus). For the general evaluation of a journal, it's more complicated; I and others have done it here sometimes in AfDs on the notability of particular journals. It is not a mechanical operation, but requires judgment, and a knowledge of the pattern of the literature in the subject, and the other journals. There is no mechanical way of doing this; there is no shortcut. Those shortcuts sometimes tried by academic administrators have no scientific validity. DGG ( talk ) 07:39, 18 May 2015 (UTC)
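For editors who want a quick, reproducible citation count for a single paper, here is a minimal sketch using the public Crossref REST API and its is-referenced-by-count field (Google Scholar, ISI, and Scopus will report different, usually higher, numbers, and a raw count still needs the judgment described above); the DOI shown is a placeholder:

```python
# Sketch only: look up how often one paper has been cited, via Crossref.
import requests

def crossref_citation_count(doi: str) -> int:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    return resp.json()["message"].get("is-referenced-by-count", 0)

if __name__ == "__main__":
    doi = "10.1000/xyz123"  # placeholder: substitute the DOI of the paper in question
    print(crossref_citation_count(doi))
```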
Of course, there are potential pitfalls in counting citations as well. For instance, the VIGOR study has been cited more than 4,000 times, according to Google Scholar, but hopefully we're not presenting its conclusions as valid. There is ample evidence that retracted or discredited articles continue to be cited at an alarming rate (e.g. Budd 1998, and more recent coverage by Retraction Watch). MastCell Talk 16:00, 19 May 2015 (UTC)
I find it easy to concur with Kingofaces43. There's also been discussion about impact factor at Talk:Acupuncture, and many editors have expressed similar opinions. Impact factor cannot be used alone to determine whether the source is reliable or not, a common mistake that some editors regularly make. I think LesVegas pointed this out well earlier:[2] "On another note, you noted that journals with an impact factor of >8-10 are a good benchmark for biosciences. But the American Journal of Sports Medicine (the highest ranked orthopedic journal) isn't even close to that high! (4.699)." Jayaguru-Shishya (talk) 13:04, 16 May 2015 (UTC)
I have raised the issue of impact factors and the absence of any guidance at Wikipedia talk:Identifying reliable sources. It might help editors in general if this discussion were held there. DrChrissy (talk) 14:42, 16 May 2015 (UTC)
I agree with the viewpoint, expressed above, that impact factors have very little utility in evaluating the quality of a journal or individual papers. I guess I'm a little concerned, though, that the context seems to include a noticeable subtext of trying to shoehorn obscure, low-profile, or poor-quality literature into our articles. MastCell Talk 16:03, 19 May 2015 (UTC)

User:DGG, could I prevail upon you to write something at Wikipedia:Impact factors that might be useful to editors and maybe even save you a bit of time eventually (if it means that you won't have to repeat yourself over and over)? WhatamIdoing (talk) 06:46, 20 May 2015 (UTC)

will do. But I can't do it now. Remind me in a few weeks. DGG ( talk ) 15:47, 20 May 2015 (UTC)

Does MEDRS cover basic historical documentation and toxin levels from chemical contamination?

In this diff you see a user saying that my source on PCB levels in people who lived near the PCB factory in Brescia, Italy, was deemed insufficient under MEDRS. Is this good practice, or is it going overboard? Does MEDRS apply to this? I think MEDRS exists to make sure claims about medicines and treatments and specific human health etiologies are carefully vetted, but i don't think it should apply to things like this paper, which presents historical measurements of PCB contamination levels in people who lived in Brescia, Italy, while the factory was there making PCBs and disposing of the waste badly in the dirt. I don't know if i'd find another source giving these measurements that would qualify, if this one doesn't, and therefore this sentence, which i think is important and surely verifiable, would be struck.

  • Please help me to understand the scope of MEDRS, because it is being used quite often to strike my edits, which at the moment mostly concern harm done by chemical spills, as well as dynamics about glyphosate and simple statements like the presence of a certain enzyme in microbes that are in the gut microbiome. Things like this have nothing to do with suggesting treatments or medicines or practices that would in any way endanger humans if the information is wrong. These are about documenting chemical spills and dynamics about pesticides that may help our general health in the future by explaining what has happened in the past.
  • About an event like a mass poisoning in Belgium with PCBs (the Dioxin Affair), or about Yusho disease in Japan, or this one about the levels of PCBs in people who lived in Brescia and worked in the factory that made PCBs, is it really a good idea to require the level of strictness that MEDRS confers? Is it really worth the tradeoff of having far, far less information about chemical spills, if there happens to be an editor with a chip on their shoulder who doesn't want any bad information about chemical companies to be seen, and so uses MEDRS to revert edits and disappear texts into a memory hole? I would like to know people's interpretations of the scope of MEDRS -- to what it applies and to what it does not. Does anything having to do with levels of a toxin in human beings absolutely have to be sourced as per MEDRS, or could it be a basic, solid primary research article like the one i sourced, which was claimed insufficient? I stand on the side of "the more information the better" about chemical contamination events. If it's used simply to support toxin level readings, and not so much interpretation of effects or anything else, then really, isn't it sufficient? Thanks. SageRad (talk) 00:28, 13 May 2015 (UTC)
Really, wp:MEDRS is just making clear the application of broader policies at wp:V, wp:RS, wp:NOR, and wp:NOT. In this case the issue is that Wikipedia does not publish original thought, including wp:SYNthesis from primary sources. Feel free though to create that synthesis and get it published elsewhere, in a reputable peer reviewed journal. Then it will be usable. LeadSongDog come howl! 03:01, 13 May 2015 (UTC)
Are you really engaging with what i asked in my question, LeadSongDog? I don't get that sense. What i am asking is: what is included in the scope of "biomedical", and isn't there a danger that we will lose the ability to provide very useful information that is found nowhere else, that is *not* making a synthesis but only providing data that will probably never be published in a "secondary source"? So, if you can please engage with my question and give your thoughts, i would appreciate it.
My reading of wp:MEDRS is that it's a special-case, stricter sourcing guideline for information that may be used for health claims, which means it's more than wp:V and wp:RS. Isn't that the case?
This is a very real case in which there are people who were exposed to PCBs by a factory, and i put a simple statement in the article on PCBs that levels in former factory workers were 10-20 times those of non-exposed populations. Are you calling this a synthesis? I cited this paper:

Turrio-Baldassarri, Luigi, et al. PCDD/F and PCB in human serum of differently exposed population groups of an Italian city. Chemosphere 73.1 (2008): S228-S234. PMID 18514762

to support this statement:

Research on the adult population of Brescia showed that residents of some urban areas, former workers of the plant, and consumers of contaminated food, have PCB levels in their bodies that are in many cases 10-20 times higher than reference values in comparable general populations.

Suppose that there is a single editor among 10 who appears to want to minimize any statements that may sound bad for chemical companies. Say that this editor cited WP:MEDRS on this, claiming that it's a biomedical claim that falls under the scope of MEDRS and therefore i need to find a MEDRS-qualified source or else the text that i included will be removed. Is that how we want Wikipedia to work? Please consider carefully, is this a MEDRS claim that would affect people's health decisions badly in the future, or is it historical data? Is this a synthesis or is it ok that i am simply reporting readings of PCB levels that were in the paper? Please take my question seriously. SageRad (talk) 07:30, 13 May 2015 (UTC)
No, I am not being flip. We are here to build an encyclopedia, not to wp:RIGHTGREATWRONGS. The selection of a particular primary source among the universe of primary sources is wp:OR. It is also usually unnecessary. Most useful work does make it to a review, such as PMID 23672403. These reviews can give us a sense of which primary source findings matter, how sound they are, how to interpret them, and whether they are contradicted by other primary findings. As to your direct example, you say "Research on the adult population of Brescia" which might imply to our readers that you were representing the sum of all knowledge about that population rather than one paper. Does "in many cases" imply 40% or 4% or 4 people? How many had similarly high levels of dioxins? Did the body burden correlate to duration of exposure? Did it correlate to the proximity to the food contamination? What other anomalous exposures did that population receive? We have to rely on published experts to select the significant things to say. We cannot credibly do our own review. LeadSongDog come howl! 14:31, 13 May 2015 (UTC)
LeadSongDog, i do appreciate you revisiting my question. I see the points you're making about the specific language i wrote. To clarify, "in many cases" meant "in the case of many congeners of PCBs" and the levels cited were averages of those whole categories, like former factory workers. So that could be made more clear with the language. In this case, i was using it solely for measurements on levels, not interpreting it as body burden or effects resulting, or anything like that. Simply to show exposure levels as reflected by amounts in serum. The source is available for others to check, of course. I'm not trying to right great wrongs, but i am hoping to get some more details about specific incidents included in this article, so it better reflects the realities of the contamination events. My main question was, what is the ultimate "spirit of the law" about the MEDRS guidelines? Is it to protect against quack claims about medical etiologies, or is it to prevent data like this which relates to human health but is not interpretive, and which relates to a historical event, from being included in Wikipedia? SageRad (talk) 16:39, 13 May 2015 (UTC)
The friction usually comes when there is only one study or a handful of studies making a claim, that is not accepted as legitimate by the majority of professionals in the field. Consider the vaccine-autism lie: it was ignored as stupid until it became impossible to ignore its effect on public health, at which point people had to study it and show that vaccines don't cause autism.
Including primary sources leaves us prey to repeating the beliefs of the lone maverick, before they have been analysed or rebutted. Guy (Help!) 15:21, 13 May 2015 (UTC)
Yes, i very much see that danger, Guy, and this is why i was asking specifically about measurements relating to historical incidents, and not interpretations of etiology. SageRad (talk) 16:41, 13 May 2015 (UTC)
the point of MEDRS is to have reliable content about health. Toxicology is part of health and everything that Guy said applies to that as well. With regards to the edits you made, I
  • removed a Wiki that you had used as a source and left a "cn" tag
  • formatted the PRIMARY source you had used and added an "mcn" tag. The content there does not just report measurements but makes reference to "reference levels". I am not too happy with lumping "residents of some urban areas, former workers of the plant, and consumers of contaminated food" together, but i am sick of dealing with this so did the minimum. Jytdog (talk) 17:03, 13 May 2015 (UTC)
I thank you for doing that, Jytdog and for leaving it and adding the mcn tag, and leaving the original source too.
In this question, i'm seeking to understand the scope of what falls under MEDRS and why. What's the spirit of MEDRS? I've read the page, of course, and in the talk page archives, i searched for "scope" and read a lot. Still, i find it under-defined. SageRad (talk) 22:08, 13 May 2015 (UTC)
the intro paragraph to MEDRS was written carefully. you will see that it mentions "health information", "biomedical information", "medical knowledge", " medical and health-related content". That was done on purpose to avoid wikilawyering efforts to work around it. It is content about health - stuff people care about because it affects health. In other words, based on everything you have said about why you are here - exactly the stuff you want to write about the most. The effect of glyphosate on people, the effects of PCBs on people, etc. Jytdog (talk) 22:25, 13 May 2015 (UTC)
Ok, according to you, i will have to find a review article for every claim relating to biochemistry and humans. I probably can live with that. But if i find a relevant review that confirms the merits of a primary source, can i use specific details from the primary source that may not be mentioned in the review? Like, for example, the three subpopulations who have 10-20 times the PCB levels of baseline populations of some congeners of PCBs? SageRad (talk) 00:44, 14 May 2015 (UTC)
biochemistry is not necessarily biomedical. the context matters. but everything in WP should be sourced to secondary sources - every policy and guideline urges that. secondary sources are one of the linchpins of NPOV/NOR/VERIFY. relying on secondary sources greatly reduces the chances of people doing this then having to do this. (the original primary source was retracted in that case... it is not always that severe, but lots of primary sources in the biology space turn out to be nonreplicable) Jytdog (talk) 01:56, 14 May 2015 (UTC)

Jytdog, what i see in the first paragraph of the guideline is a reference to medical advice, and then "health information", "biomedical information", and "current medical knowledge". That, to me, implies that the purpose of MEDRS is to be very careful about any information that anyone could construe as medical advice. However, on the current question, i was asking if it's ok to use toxin levels from a PCB contamination incident in Italy, using only a primary source to support the mention of the toxin level readings. That was the text of the article which i intended to support with a primary research article in Chemosphere. I take the guideline as stating that ideally, everything related to medical interpretation that may be construed as advice to be sourced to secondary sources. But nobody ever doubts that avoiding exposure to PCBs is a good idea, so this could not be construed as providing advice. It's to tell a history. It's not science, but rather social history. SageRad (talk) 01:18, 15 May 2015 (UTC)

I think that a good primary source (=peer-reviewed/decent journal/not contradicted or disputed by other sources) can, indeed, WP:Verify a statement like "In 1932, one study reported that the level of X in Italy was Y", but if you need to use that source, is that detail really WP:DUE, and is it being used to imply a conclusion that isn't in any source, e.g., that the levels now are higher or lower than they used to be? WhatamIdoing (talk) 06:43, 20 May 2015 (UTC)
Good point. In this case, the primary source provided the information about levels of PCBs in the bodies of people who were exposed in Brescia, Italy, by working or living near the factory. In that article, there is a section about PCB contamination sites around the world, and i am paying attention to balance and weight. I have pruned some of the longer descriptions, and expanded some of the shorter, or absent, ones. It's not synthesis, because it's a description of PCB exposure, and we already accept that exposure to or body burden of PCBs is not a good thing. I just wanted to gain an interpretation of WP:MEDRS to know whether a primary source is decent for including a fact like this. Often, a review article will mention a study, with some kind of interpretation or synthesis, but it won't cite exact data like this, and in the case of the Brescia contamination, i thought this was good info and reads well in the entry. Here is the section of the article about Brescia. I wanted to remove the sentences "The values reported..." and "As a result..." and re-work the section a bit, but when i did begin, i added the sentence "Research on the adult..." and then got this "medical citation needed" tag for that. I thought that was a completely reasonable, solid sentence about this contamination site, and so i had to come here for some interpretation, because i don't think this is what WP:MEDRS is about, or is intended for, to prevent an edit of this sort. Thanks. SageRad (talk) 09:57, 21 May 2015 (UTC)
You are still trying to use outdated primary sources, and worse you are using them for statements that are both in the present tense and in the voice of the encyclopedia. LeadSongDog come howl! 13:08, 21 May 2015 (UTC)
Jeez, what a level of condescension, naming this edit "still not understanding?" As for "outdated" -- the research was done when it was done. As for the phrasing, that's up for suggestion and i'd take your point about the present tense being used. Sure, that makes sense. It could say "levels in former factory workers were found to be....". As for "still not understanding?" -- heck, i understand a lot of things, and this is a dialogue that we're supposedly having, to hash out the finer points of things. What's with the tone here? SageRad (talk) 13:17, 21 May 2015 (UTC)
Sage, you've been here about a month, and during that month you've been in conflict with nearly every editor you've had contact with. I suppose its possible that the dozen or so of us that you've interacted with are inappropriately censoring you, are unable to properly understand Wikipedia's policies and guidelines, and/or are pro-industry shills, but given that you apparently came here after having similar disagreements on other websites, perhaps its time to consider whether your own behavior is the problem? Formerly 98 talk|contribs|COI Statement 14:44, 21 May 2015 (UTC)
Sage, you've been given links to the policies and guidelines that pertain, yet you act as if you don't understand what they say. Either you don't understand, or you don't care. I'd prefer to believe the former, but you are making it difficult. "Dated" refers to wp:MEDDATE, an important part of wp:MEDRS. Your alternative of "...were found to be..." (as distinct from "...were reported as...") implies that the primary reports were accurate, significant, correctly reported, and never retracted. We do not have evidence that this is the case, which is exactly why we require current secondary sources, to avoid committing original research. The apparent absence of secondary sources, even after all these years, could be taken as implying that the primary source you want to use is not seen by experts as worthy of note. Now, is there something about that which is still unclear, and if so what? LeadSongDog come howl! 15:08, 21 May 2015 (UTC)

LeadSongDog, i do care, very much, and that is why i am here asking for serious dialogue about this. I have read and quoted the policy and asked for clarification, and yet i find that there is a smell of bias here. Here's an example. I understand the concept of "dated" and i do know that more recent research is preferable when available. I also know the guideline about MEDRS wanting a review article from the last 5 years, preferably. But i also know that this is largely for etiology of disease and health because of the extreme need for accuracy on any attribution of causal connections relating to human health. The historical data on a contamination event is often studied within a few years of the event, or at a certain point in time and then not again, because it's already been done. And it's also not an etiology thing. It's a measurement data thing. It's a simple measurement of PCB levels in certain populations exposed to contamination. I would like your admission on these points, one by one, or denial with the reason behind it.

  • As to your wording suggestions there, that's fine for me to hear and consider and discuss with other editors. I appreciate those distinctions. I wish it could be in a positive spirit of cooperation instead of an addendum to a fractured and contentious dialogue. I would love to be methodical and address each point to completion, but these dialogues do not go that way; they tend to jump from one topic to another without ever reaching completion on any one claim, and then to build up not the reality but the appearance of a preponderance, and that's not cool. That's really a sort of bulldozer approach to dispute resolution and it's not inherently just or correct or leading to good results. It's more like a herd attack. SageRad (talk) 17:56, 21 May 2015 (UTC)
Also, you forgot to sign your comment.
So, in summary, i would like admission or denial, with cause, as to the fact that the claim is not an etiology claim.
Secondly, it's a historical event report, not a topic of ongoing research.
Thirdly, in light of the second claim, the preference for more recent research is less strong.
Fourthly, there's a lot to this conversation, and i am being neither stupid nor uncaring.
SageRad (talk) 17:59, 21 May 2015 (UTC)
Ok, I signed it, thanks for pointing that out. Apparently you have not understood MEDDATE: "If recent reviews do not mention an older primary source, the older source is dubious." That's pretty direct. Failure to understand does not mean that you are being stupid, and nobody said that you were, as there are many other possible explanations. In some similar cases, cognitive dissonance has been at play, but in the end, does it matter why an edit doesn't conform to policy?
You started off with "I think MEDRS exists to make sure claims about medicines and treatments and specific human health etiologies are carefully vetted, but i don't think it should apply to things like this paper that presents historical measurements of PCB contamination levels in people who lived in Brescia, Italy while the factory was there making PCBs and disposing of the waste badly in the dirt", and then repeatedly demanded a response. You haven't gotten a reply on etiology because MEDRS says nothing about etiology and little about disease. Its scope is much broader. When you characterize a "simple measurement of PCB levels in certain populations" you betray a great deal of faith in the capabilities of the researchers involved. Populations are rarely "certain", and measurements on blood chemistry in those populations are rarely "simple". How much of the veal did these farmers consume? Over what time period? Over what geographic area did the animals graze? Did the animals consume surface or well water? The humans? What time elapsed between last exposure and testing?
We can't assess these things, we must leave it up to published experts. We need those secondary sources. Please accept this and direct your considerable energy to finding those sources. LeadSongDog come howl! 21:19, 21 May 2015 (UTC)

I don't have time to sort through this dispute personally, so I'd like some quick answers from the objectors:

  1. What's the approximate review cycle length for this specific subject? (Review cycle = I publish a review article based on n primary sources, then you read what I wrote, decide that it's outdated because of a study published after my review was written, and so you publish your own review, to include the newer source(s).)
  2. If the source he cited was four or five years old, rather than seven, would you still be objecting to its inclusion (specifically) per MEDDATE? WhatamIdoing (talk) 22:36, 22 May 2015 (UTC)
It isn't a review at all, it's a primary source, so yes the objection would stand. LeadSongDog come howl! 16:16, 23 May 2015 (UTC)
LeadSongDog, an objection per WP:MEDPRI is not the same thing as an objection per WP:MEDDATE. I want to know whether MEDDATE specifically matters, not whether you'd accept the source overall. WhatamIdoing (talk) 03:49, 1 June 2015 (UTC)
The two are not independent. While secondary sources are sometimes acceptable beyond five years in areas where little new research is being done, primary sources are normally a short-term measure to be used for only enough time to allow secondary sources to be published: one or two years should be tops in all but the most obscure subject areas.LeadSongDog come howl! 05:01, 1 June 2015 (UTC)
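A minimal sketch that encodes the rule of thumb stated above (roughly five years for reviews, longer where little new research is being done, and only a year or two for a primary source used as a stopgap); the cutoffs come from this discussion and are illustrative, not the guideline's exact text:

```python
# Sketch only: the WP:MEDDATE rule of thumb as described in this thread.
def meddate_note(pub_year: int, is_review: bool,
                 little_new_research: bool = False,
                 as_of_year: int = 2015) -> str:
    age = as_of_year - pub_year
    if is_review:
        if age <= 5 or little_new_research:
            return "recent enough (or an area with little new research)"
        return f"review is {age} years old: look for a newer one"
    if age <= 2:
        return "primary source: acceptable only as a short-term stopgap"
    return f"primary source is {age} years old: a secondary source should exist by now"

print(meddate_note(2008, is_review=False))  # e.g. the 2008 Chemosphere paper discussed above
```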

This is all very interesting, but what i'm trying to communicate here is that there may be studies of this kind which never make it into a review article because they are of a more historical nature than an etiology-seeking nature, and therefore are not the kind that get wrapped into topical review articles from time to time. A paper on a potential causal pathway for a disease, for example, would be included in a review article fairly soon, if it were valid and worthy. But, a paper on serum levels of people exposed to PCBs in a historic incident may not be. There may be no forthcoming review article about PCB contamination events because it's not the sort of topic that gets this kind of periodic review. It's more historical. I would love to see some acknowledgement of this distinction, or even some disagreement to this point. I'd like this point addressed, because it's the crux of this discussion, in my mind. I would like to be able to include data about historical events regarding human health even if they're not included in a review article because this kind of study does not get reviewed very often. SageRad (talk) 21:45, 1 June 2015 (UTC)

Systemic bias 1

BullRangifer has proposed endorsing systemic bias based on country of origin here:

There are serious red flags regarding Chinese medical research. Especially China, Japan, Russia/USSR, and Taiwan have been shown guilty of systemic bias in favor of traditional Chinese medicine subjects like acupuncture.[1] Editors should consider carefully how to manage sources from these countries. The problem also includes peer review scams.[2][3]

References

  1. ^ "Some countries publish unusually high proportions of positive results. Publication bias is a possible explanation. Researchers undertaking systematic reviews should consider carefully how to manage data from these countries." Vickers, Andrew (April 1, 1998), Do certain countries produce only positive results? A systematic review of controlled trials., Control Clin Trials, retrieved May 11, 2015 {{citation}}: Italic or bold markup not allowed in: |publisher= (help)
  2. ^ Ferguson, Cat (November 26, 2014), Publishing: The peer-review scam, Nature, retrieved May 11, 2015 {{citation}}: Italic or bold markup not allowed in: |publisher= (help)
  3. ^ Qiu, Jane (January 12, 2010), Publish or perish in China, Nature, retrieved May 11, 2015 {{citation}}: Italic or bold markup not allowed in: |publisher= (help)

I'm not sure that this information should be included here at all, but I'm certain that this isn't the best way to handle it. It declares that almost all publications by nearly all Asian researchers are suspicious, with the peer-review scam point resting on the actions of just a couple of individuals.

While the 1998 study was limited to acupuncture, its conclusions are outdated (it's based upon three decades of papers, ending two whole decades ago), and it's not at all clear that these societies were treating TCM any differently from any other subject. Communist research frankly sucked. It worked pretty much like a first-year chemistry lab: find out what the acceptable answer is, draw the curve, plot the points, and, last of all, take your measurements and make sure they line up with what the teacher wants. That doesn't mean that all Russian and Chinese research publications are as bad now, and we shouldn't be presenting this 17-year-old primary study as if it had anything useful to tell us about recent research work.

The English Wikipedia has enough of a problem with systemic bias. I don't think that we should encourage it. WhatamIdoing (talk) 20:10, 11 May 2015 (UTC)

Yow! And the assertion is that English language publications are less biased? Don't think we want to get into that food fight. Formerly 98 talk|contribs|COI Statement 20:19, 11 May 2015 (UTC)
There is no such assertion. We are just doing what the research recommended. Note that the wording is lifted, with some alteration, directly from the review: "Researchers undertaking systematic reviews should consider carefully how to manage data from these countries." That's good advice for us. This is far from "a couple". If we find such reviews showing systemic bias in other countries, we can add that. There is no reason to reject what we do know to be true.
The systemic bias WAID complains about is our systemic bias for quality research. This content reinforces that legitimate bias. By not warning against the inclusion of shoddy research, the balance tips away from quality research toward shoddy research in favor of TCM and acupuncture, which would suit acupuncturists just fine. This deletion undermines our efforts to improve the quality of our sources by warning editors to avoid shoddy research. I'm sure it can be worded better, but it should not be removed entirely. -- BullRangifer (talk) 05:59, 12 May 2015 (UTC)
  • one of the two sources about Chinese publishing is dated to 2014, and there is an update to the Vickers review dated 2014 written by (ahem) Chinese people. Reporting on systemic problems is not a reflection of systemic bias; that label is inaptly applied, i think. I would support a revision to make this narrower and update the sourcing as follows:

As of 2015, there are concerns regarding positive bias in publications from China on traditional chinese medicine.[1][2] Such sources may be red flagged. The problem also includes issues with the peer review system in China.[3][4]

References

  1. ^ Li J, et al The quality of reports of randomized clinical trials on traditional Chinese medicine treatments: a systematic review of articles indexed in the China National Knowledge Infrastructure database from 2005 to 2012. BMC Complement Altern Med. 2014 Sep 26;14:362. PMID 25256890
  2. ^ "Some countries publish unusually high proportions of positive results. Publication bias is a possible explanation. Researchers undertaking systematic reviews should consider carefully how to manage data from these countries." Vickers, Andrew (April 1, 1998), Do certain countries produce only positive results? A systematic review of controlled trials., Control Clin Trials {{citation}}: Italic or bold markup not allowed in: |publisher= (help)
  3. ^ Ferguson, Cat (November 26, 2014), Publishing: The peer-review scam, Nature {{citation}}: Italic or bold markup not allowed in: |publisher= (help)
  4. ^ Qiu, Jane (January 12, 2010), Publish or perish in China, Nature {{citation}}: Italic or bold markup not allowed in: |publisher= (help)

Thoughts? Jytdog (talk) 22:41, 11 May 2015 (UTC)

This is a major issue which does exist, not just in China, though it is probably best documented there - e.g. I'm aware of this as well. I also wouldn't characterize the problems described in citation 4 as "a couple of individuals" - they do provide an example but use it as an illustration of broader issues, e.g. "China's science ministry commissioned a survey of researchers...one-third of more than 6,000 surveyed across six top institutions admitted to plagiarism, falsification or fabrication." I do think the narrower wording is better, though perhaps with an acknowledgement that the problem may be broader. (I would also be careful to distinguish this as very different from saying non-English sources are more likely to be biased. It's true that the highest-quality research is disproportionately reported in English, as the lingua franca of science - e.g. the language of the top journals - but that is a separate issue which is unrelated to bias per se.) Sunrise (talk) 00:54, 12 May 2015 (UTC)
I think this is a variation on a WP:BEANS issue. The people who can correctly interpret and apply the advice you're trying to give them already know there are issues with some of this work, and the people who don't know it don't understand what they're reading anyway and will try to lawyer around whatever advice you give them. Opabinia regalis (talk) 06:10, 12 May 2015 (UTC)
It's not often that we get solid reviews proving consistent and widespread systemic bias in ONE particular subject area (TCM/acupuncture), and the researchers recommend that "Researchers undertaking systematic reviews should consider carefully how to manage data from these countries." We should do the same in our evaluation of sources. This is clearly an improvement of MEDRS and should be restored. (Keep in mind that the initiation of this thread is extremely misleading and not to be trusted.) -- BullRangifer (talk) 06:47, 12 May 2015 (UTC)
That indeed might be the case, but instead of mere speculation we need sources to confirm this. Publication bias is quite regularly studied, so it shouldn't be a problem to find such a study. Cheers! Jayaguru-Shishya (talk) 13:18, 12 May 2015 (UTC)
There is no speculation. Some of the sources are used as references. The very idea for this content comes from referenced peer reviewed reviews on the subject. These countries have a well documented systemic bias using shoddy research to promote acupuncture and TCM. That's a huge red flag, and we should note this in MEDRS. -- BullRangifer (talk) 14:17, 12 May 2015 (UTC)
This is indeed a concerning state of affairs, and it does appear to be well supported that it's a problem. I'm not 100% sure something so topic-specific should be in MEDRS the guideline but it needs to be documented somewhere, maybe in a FAQ at the affected articles. Or possibly an essay that could be linked to when necessary when a relevant discussion arises at an article Talk page. Zad68 14:28, 12 May 2015 (UTC)
@BullRangifer: So will you share that "well documented systematic bias" with us, please? Jayaguru-Shishya (talk) 20:10, 12 May 2015 (UTC)
??? The references are right in the content. Also they are presented below, with more, by others. The bias is well documented in high quality sources, and it's a huge bias we should not ignore. -- BullRangifer (talk) 02:08, 13 May 2015 (UTC)

← There are plenty of reliable sources to substantiate the concern over research fraud in China, often attributed to the academic culture there. A short list would include:

A 2009 study by a Chinese computer scientist estimated that research fraud was a $150 million-a-year industry in China (in the form of ghostwriting services, fabrication of research, bribery to bypass peer review, and forgery). The study is mentioned in a number of the above sources, although I'm having trouble finding a direct link to it. In any case, this phenomenon is a real source of concern in the scientific world, as the above sources demonstrate. Now, I don't think we need to write anything into our guidelines and policies to address it—and I don't really want this to become another front in the acupuncture/TCM wars—but it is important to be aware that this is a real concern and not simply editorial speculation. MastCell Talk 15:28, 12 May 2015 (UTC)

MastCell, I don't doubt it at all. Actually, that's why there are studies on the publication bias, and I think we should first present those before jumping to any conclusions of our own. Scientists are scientists; we are Wikipedia editors. Jayaguru-Shishya (talk) 20:29, 12 May 2015 (UTC)
One study found that 99.8% of randomized controlled trials of acupuncture published in China had a positive result. 99.8%. If that doesn't scream systematic bias I don't know what does. That level of bias I'm sure has had the intended effect on skewing systematic reviews in favor of their chosen result.... Yobol (talk) 21:01, 12 May 2015 (UTC)
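An illustrative back-of-the-envelope check of that intuition (the trial count and the assumed "true" positive rate below are hypothetical, not figures from the study Yobol mentions): even granting a very generous 90% underlying positive rate, a 99.8% positive share across a few hundred independent trials is essentially impossible without some form of bias.

```python
# Sketch only: hypothetical numbers, chosen purely to illustrate the point.
import math
from scipy.stats import binom

n_trials = 500                 # hypothetical number of published RCTs
assumed_true_rate = 0.90       # deliberately generous assumption
observed = math.ceil(0.998 * n_trials)   # 99.8% positive -> 499 of 500

# Probability of seeing at least `observed` positive trials by chance
p = binom.sf(observed - 1, n_trials, assumed_true_rate)
print(f"P(>= {observed}/{n_trials} positive | p = 0.9) = {p:.2e}")
```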
  • It would be very silly to pretend this issue does not exist. It would be equally silly to pretend that "Western" scientific sources have an equal and opposite bias. The problem of systemic bias applies especially to China and Russia; the literature in most countries has no systemic bias other than towards reality and empirical verifiability. The idea that "reductionist science" is biased against empirically unverifiable notions is a fallacy to which Wikipedia should not succumb. Guy (Help!) 10:21, 13 May 2015 (UTC)
  • Guy, I think I understand what you mean, but maybe this is what you intended to say...? "The idea that "reductionist science" should not be biased against empirically unverifiable notions is a fallacy...." because our bias should be against such notions. They are not founded in reality or promoted by RS. We will still document them, because they are part of the sum total of human knowledge and experience, but we won't give them any credence. -- BullRangifer (talk) 14:47, 13 May 2015 (UTC)
    I suspect, BullRangifer, that you and Guy are using "bias" in two different senses. Guy is using it to mean "irrationally prejudiced against...". You are reading it to mean "likely to produce results (when carried out honestly and correctly) which contradict...". In other words, Guy is noting that the scientists don't have a bias against acupuncture and TCM, whereas you are noting that reality does. TenOfAllTrades(talk) 15:14, 13 May 2015 (UTC)
Yes (and actually yes to both, but I meant what I said). Science, as a body of knowledge or system of inquiry, does not care in advance what the outcome of a test will be. If a test refutes the null hypothesis and provides robust proof that homeopathy works, then science will have to pick up the pieces, rewrite the laws of thermodynamics, conservation of energy and mass action, and so on. The point about SCAM is that a positive outcome seems, according to the evidence, to depend on the investigator having a vested interest in the outcome.
The only bias science has is against false claims. Guy (Help!) 15:17, 13 May 2015 (UTC)
Ah! Okay then. I think we believe the same way. I just tend to see "bias" as a positive or negative leaning, and the negative being the same as a "prejudice", but that's just my careless use of language. I'm a bit "language confused", since I've lived in Europe most of my adult life and my daily language is Danish (or Swedish or Norwegian), rather than my mother tongue, which is English. That means my English grammar, punctuation, etc. aren't always correct, so don't hesitate to correct me.
For more about my use of the word "bias", see here, under "synonyms": "A bias may be favorable or unfavorable: bias in favor of or against an idea. Prejudice implies a preformed judgment even more unreasoning than bias, and usually implies an unfavorable opinion: prejudice against people of another religion."
Science is biased toward evidence, just as editors here should be biased toward RS. -- BullRangifer (talk) 15:38, 13 May 2015 (UTC)

The other day DrChrissy questioned whether or not this line of thought was racist and I can definitely see his reason for asking the question. It does seem that we could be stereotyping if we ever ignore research simply because of its author's country of origin. Just as we would never say all black people are X, it might be okay if we say "this individual black person is X based on these criteria", but even then we had better be cautious because we might rightfully be accused of being not very PC. To not take the same cautions regarding research is wrong and the Wikipedia community at large will really be upset if they find out these are the proposed new guidelines. And don't forget some research that comes from China has positive results simply because they have different testing methodologies resulting from their unique ethics. It is systemic bias if we ever refuse to include research from other cultures and we should tag articles where this is taking place. LesVegas (talk) 19:00, 13 May 2015 (UTC)

Exactly. I am still waiting for those studies on the publication bias. I don't want to hear any explanations like "Some countries publish unusually high proportions of positive results. Publication bias is a possible explanation." Here, "possible" might have two meanings: either it has not been studied (quite basic), or more likely it has been studied but didn't yield any results worthy of publishing. Jayaguru-Shishya (talk) 19:52, 13 May 2015 (UTC)
Jayaguru-Shishya, I'm at a loss as to how we can help you anymore, since you have refused to look (after being told repeatedly) at the research and sources we have presented to you above. It is not editorial speculation which you are rejecting, but reliable sources and a review of the literature showing the current miserable state of affairs. Much of the literature is outright fraudulent. You even refuse to accept the exact wording from the review! Such a refusal is OR speculation on your part. -- BullRangifer (talk) 05:59, 14 May 2015 (UTC)
BullRangifer, I don't doubt at all that there might be serious problems with the Chinese studies. And that's the very reason we are implementing statistical methods instead of mere speculation. Statistics, BullRangifer, statistics. Unfortunately I don't have access to the full text, but so far the conclusions state: "Publication bias is a possible explanation.". Note: possible, so that would indicate that it hasn't been studied in that particular report. Anyway, that's not WP:OR, BullRangifer, that's a direct quote. Jayaguru-Shishya (talk) 14:29, 14 May 2015 (UTC)
please don't comment on papers you haven't read. Jytdog (talk) 14:39, 14 May 2015 (UTC)
Why not? I have requested numerous times for the studies on the publication bias. Whether it is there, or then it's not. If it is, please bring it to the attention of all of us. Thank you a lot in advance, Jytdog! Jayaguru-Shishya (talk) 14:43, 14 May 2015 (UTC)
Facepalm Facepalm Jytdog (talk) 14:47, 14 May 2015 (UTC)
May I respectfully correct LesVegas: I believe I only asked the question of whether it was racist... I don't think I made an unequivocal statement that it, or the editor, was racist. DrChrissy (talk) 21:47, 13 May 2015 (UTC)
Jytdog, have you even seen your Talk Page[3]: "Do you have an access to this article[6]? If so, could you please send me that one? I'd like to see if and how they might have possibly studied the subject." No need for facepalms, just try to keep cool, will you? So far, the conclusion doesn't sound really confident about its own findings. Cheers! Jayaguru-Shishya (talk) 14:52, 14 May 2015 (UTC)
DrChrissy, I am so sorry I got that wrong! You are absolutely correct! I changed my statement above to more accurately state what you said. Please feel free to correct me anytime I get something wrong like that. I can't apologize enough. LesVegas (talk) 01:47, 14 May 2015 (UTC)
yes. you should not discuss sources you haven't read. that is scholarship 101. if your scholarship is so abysmal that you actually need to be taught that, please read Wikipedia:Identifying_reliable_sources_(medicine)#Choosing_sources. Jytdog (talk) 14:55, 14 May 2015 (UTC)
Absolutely not a problem - I guessed it was just a slip of the keys.DrChrissy (talk) 09:30, 14 May 2015 (UTC)
Please don't discredit yourself any further, Jytdog. Back to statistics: are there any studies on the publication bias, or is it mere speculation? I am also waiting for the source to check it myself. Cheers! Jayaguru-Shishya (talk) 15:41, 14 May 2015 (UTC)
If LesVegas is in agreement, I am happy to have the above 3 comments and this one hatted as "Off Topic" or something like that.DrChrissy (talk) 09:36, 14 May 2015 (UTC)
please read the references already provided. Jytdog (talk) 15:44, 14 May 2015 (UTC)
Absolutely in agreement, Doc! I would hat them off myself if I knew how, so by all means go ahead. The statistical discussion above all of that is far too important. LesVegas (talk) 14:53, 14 May 2015 (UTC)
Sure, Jytdog. Waiting for you to send the one I've asked. Cheers! Jayaguru-Shishya (talk) 15:51, 14 May 2015 (UTC)
do not hat my comment, thanks. JS check your email, as i mentioned on my talk page. it is your responsibility to get your hands on sources; continually asking if there is evidence about X when you have been provided sources goes beyond bad scholarship to bad faith discussion. i won't be responding to you further. Jytdog (talk) 15:57, 14 May 2015 (UTC)
Jayaguru-Shishya i emailed you through the WP system at 11 AM and have nothing back from you. I guess you are getting the Vickers reference some other way. Jytdog (talk) 22:05, 14 May 2015 (UTC)
You did? A big thanks, Jytdog! I'll have a look tomorrow, it's getting rather late here right now ... although I have a day off tomorrow! :-D Thanks! Jayaguru-Shishya (talk) 22:59, 14 May 2015 (UTC)

Jayaguru-Shishya, such an analysis would be a form of OR. This is from MEDRS: "Editors should not perform a detailed academic peer review. Do not reject a high-quality type of study due to personal objections to the study's inclusion criteria, references, funding sources, or conclusions." It seems you are pushing for rejecting the authors' conclusions if their statistics don't pass your peer review.

This is very confusing. We usually accept the conclusions of review authors unless there is unequivocally something wrong pointed out by other RS. Your objections are not based on anything like that, but the uncomfortable fact that research from these countries which supports acupuncture cannot be assumed to be reliable. That undermines the foundations of your support for acupuncture, which is understandably an uncomfortable position, but not grounds to reject a peer reviewed review based on your own OR speculation or own OR analysis.

By contrast, the rest of us are taking the review at face value as a call to increased awareness and caution when dealing with positive research, especially from these countries. This is a huge red flag which should be part of MEDRS. -- BullRangifer (talk) 14:47, 15 May 2015 (UTC)

You said: "That undermines the foundations of your support for acupuncture, which is understandably an uncomfortable position", at this point I'd urge you to keep your cool, BullRangifer. I am not in support of acupuncture, but I do embrace Wikipedia:Verifiability and proper use of sources. As I have already said earlier, I don't doubt the questionable reliability of Chinese sources. I just got my hands on the source today and indeed, the Vickers article does not study publication bias, but deals with the reliability of those sources (China, Japan, Russia/USSR and Taiwan) in general. It suggests that publication bias is a possibility among five other possible explanations, but it does not study it. The other five explanations are 1) the "sample of trials may not have been representative", 2) the "abstracts may not have accurately reflected the results of trials", 3) their "judgements of whether the test treatment was superior to control were, in some cases, subjective. ", 4) trials "conducted in certain countries may involve more outcome measures and “data dredging."", and 5) trials "may have been conducted with different levels of methodologic rigor.".
Does the source urge editors to caution with respect to Chinese sources? Yes, it definitely does. Is the source a study on publication bias? No, it's not. As I have said, I don't doubt the questionable nature of Chinese sources, but I am saying that the article doesn't study publication bias. There's a big difference. Jayaguru-Shishya (talk) 14:04, 16 May 2015 (UTC)
Excellent response! Respect and great thanks, and apologies for the impatience, coming your way. We simply didn't understand what you were driving at. So have we been too simplistic? The problem is larger by a factor of six. We have used the term "systemic bias", rather than "publication bias". Is "systemic bias" inclusive enough to encompass those six factors? -- BullRangifer (talk) 16:10, 16 May 2015 (UTC)
No problem, apology accepted. It's good we got the misunderstanding sorted out now. Anyway, I think we need to find a good way to address the problem with respect to the sources as a whole. Cheers! Jayaguru-Shishya (talk) 19:50, 16 May 2015 (UTC)

Systemic bias 2

To make this easier, I'm starting a new section. Here are the three suggested wordings so far. The first two use the 2014 Vickers review, and more:

1. This was mine, which is at the top of the previous section

There are serious red flags regarding Chinese medical research. Especially China, Japan, Russia/USSR, and Taiwan have been shown guilty of systemic bias in favor of traditional Chinese medicine subjects like acupuncture.[1] Editors should consider carefully how to manage sources from these countries. The problem also includes peer review scams.[2][3]

References

  1. ^ "Some countries publish unusually high proportions of positive results. Publication bias is a possible explanation. Researchers undertaking systematic reviews should consider carefully how to manage data from these countries." Vickers, Andrew (April 1, 1998), Do certain countries produce only positive results? A systematic review of controlled trials., Control Clin Trials, retrieved May 11, 2015 {{citation}}: Italic or bold markup not allowed in: |publisher= (help)
  2. ^ Ferguson, Cat (November 26, 2014), Publishing: The peer-review scam, Nature, retrieved May 11, 2015 {{citation}}: Italic or bold markup not allowed in: |publisher= (help)
  3. ^ Qiu, Jane (January 12, 2010), Publish or perish in China, Nature, retrieved May 11, 2015 {{citation}}: Italic or bold markup not allowed in: |publisher= (help)
2. This was contributed by Jytdog

As of 2015, there are concerns regarding positive bias in publications from China on traditional chinese medicine.[1][2] Such sources may be red flagged. The problem also includes issues with the peer review system in China.[3][4]

References

  1. ^ Li J, et al The quality of reports of randomized clinical trials on traditional Chinese medicine treatments: a systematic review of articles indexed in the China National Knowledge Infrastructure database from 2005 to 2012. BMC Complement Altern Med. 2014 Sep 26;14:362. PMID 25256890
  2. ^ "Some countries publish unusually high proportions of positive results. Publication bias is a possible explanation. Researchers undertaking systematic reviews should consider carefully how to manage data from these countries." Vickers, Andrew (April 1, 1998), Do certain countries produce only positive results? A systematic review of controlled trials., Control Clin Trials {{citation}}: Italic or bold markup not allowed in: |publisher= (help)
  3. ^ Ferguson, Cat (November 26, 2014), Publishing: The peer-review scam, Nature {{citation}}: Italic or bold markup not allowed in: |publisher= (help)
  4. ^ Qiu, Jane (January 12, 2010), Publish or perish in China, Nature {{citation}}: Italic or bold markup not allowed in: |publisher= (help)
3. This is now actual content, contributed by QuackGuru, and only using Ernst 2012

Chinese authors use more Chinese studies, which have been demonstrated to be uniformly positive in respect to acupuncture research.[1]

References

  1. ^ Ernst, Edzard (2012). "Acupuncture: What Does the Most Reliable Evidence Tell Us? An Update". Journal of Pain and Symptom Management. 43 (2): e11–e13. doi:10.1016/j.jpainsymman.2011.11.001. ISSN 0885-3924. PMID 22248792.

How can we improve this, based on the six factors mentioned by Jayaguru-Shishya at the end of the previous section? Does "systemic bias" still cover the subject well enough, or should we add more?

I think we should at least include more sources in the reference, as well as the wording of the actual six factors, and also deemphasize "publication bias", since it is not the only factor covered under "systemic bias". Thanks so much to Jayaguru-Shishya. -- BullRangifer (talk) 16:10, 16 May 2015 (UTC)

I agree with BullRangifer. Publication bias should be de-emphasized since the article doesn't even study it, and we should find a better way to address the problem as a whole. Is it best done by adding the six factors as mentioned in the article? I don't know, but perhaps additional sources could serve as reinforcement. The Vickers article discusses these factors as possibilities, but I think it's rather a good discussion they're having. For example, when it comes to explanation No. 1 (the "sample of trials may not have been representative") they say that:

Possibly, negative trials originating from eastern Europe and Asia are found solely in non-Medline journals. We believe that this is unlikely; Medline might be expected to be a conservative source of information on unconventional therapies. Moreover, anecdotal evidence, such as the results presented at acupuncture conferences, does not suggest any considerable number of negative results published in non-Medline journals. For example, of the many hundreds of trials reported at the third World Conference on Acupuncture [4], we were unable to locate any studies originating in East Asia that showed acupuncture to be equal or inferior to a control procedure.

I guess there is some thinking to be done in order to address these in an appropriate manner and to avoid representing them too much as "carved in stone" truths. I believe, however, that something can be worked out. As I originally directed my remarks at wording number one (the only one referring to Vickers), I think it could be used as the basis for further improvements. Jayaguru-Shishya (talk) 19:47, 16 May 2015 (UTC)
Please read the other three sources before you pronounce, Jayaguru-Shishya. Thanks. Jytdog (talk) 20:34, 16 May 2015 (UTC)

I particularly like some aspects of Jytdog's phrasing: "As of 2015" identifies that it is a current (but potentially temporary) situation, although strictly speaking, it ought to say "As of 2014" (the publication date) or "As of 2012" (the most recent sources used in the study). Also, saying "from China" identifies it as a geopolitical problem rather than a racial problem.

I think that "red flagged" is unclear; something like "should be used with caution" would make more sense. I'd drop the elderly Vickers source, and I'm not sure that peer-review scams is either accurate (it affected journals all over the world, not just in China) or necessarily worth mentioning (journals have improved their security since then and retracted hundreds of articles). WhatamIdoing (talk) 16:30, 17 May 2015 (UTC)

I really like some of your thoughts about Jytdog's wording. -- BullRangifer (talk) 19:58, 17 May 2015 (UTC)
4. New proposal

Okay, trying to integrate the above discussion into text:

As of 2014, there are concerns regarding positive bias in publications from China on traditional Chinese medicine.[1][2] Such sources should be used with caution. The problem also includes issues with the academic system in China.[3]

References

  1. ^ Li J, et al. The quality of reports of randomized clinical trials on traditional Chinese medicine treatments: a systematic review of articles indexed in the China National Knowledge Infrastructure database from 2005 to 2012. BMC Complement Altern Med. 2014 Sep 26;14:362. PMID 25256890.
  2. ^ Further information: Vickers, Andrew (April 1, 1998), "Do certain countries produce only positive results? A systematic review of controlled trials", Control Clin Trials; Ernst, Edzard (2012), "Acupuncture: What Does the Most Reliable Evidence Tell Us? An Update", Journal of Pain and Symptom Management 43 (2): e11–e13, PMID 22248792.
  3. ^ Qiu, Jane (January 12, 2010), "Publish or perish in China", Nature.

Changes from Jytdog's version:

  • "As of 2015" -> "As of 2014" as the date of publication
  • "may be red-flagged" -> "should be used with caution"
  • As a compromise for Vickers and Ernst, I've put them in a "Further information" reference.
  • Dropped the reference to peer-review scams. I replaced "peer-review system" with "academic system" since that's better supported by the remaining reference.

Thoughts? Sunrise (talk) 11:14, 20 May 2015 (UTC)

Better. Shall we follow the modern approach of letting the perfect be the enemy of the good, or shall we be all old-fashioned and wiki-like, and start here and improve later if someone has brilliant insights? WhatamIdoing (talk) 22:27, 22 May 2015 (UTC)

If we are still (at this point) talking about adding something to MEDRS to deal with specific issues, articles, or sources, I think Zad gets it right:

I'm not 100% sure something so topic-specific should be in MEDRS the guideline, but it needs to be documented somewhere, maybe in a FAQ at the affected articles. Or possibly an essay that could be linked to when a relevant discussion arises at an article Talk page. Zad68 14:28, 12 May 2015 (UTC)

I do not support the idea of mentioning specific cases in MEDRS-- have we mentioned the frequent copyright issues found in Indian journals? Where does it stop? Content areas can have FAQs, or we can make a general FAQ for this page. SandyGeorgia (Talk) 01:35, 23 May 2015 (UTC)

I do not support the inclusion of MEDRS additions like this. At best, I agree with SandyGeorgia above, but in that instance for different reasons. As Jayaguru-Shishya has argued, the standard here must be supported by ironclad statistics that have exhausted all other possible explanations for the discrepancy in research findings. For instance, has any of this research tackled the idea that if a Chinese person drinks "TCM herbal formula X", that person will receive more of a placebo benefit than a U.S. citizen who drinks the same formula but doesn't have the same cultural understanding of herbs and might not believe in it? If you receive more of a placebo response, you are also receiving more of a total therapeutic response, and if studies fail to address issues such as these, how can we possibly red-flag a specific nation's research at the guideline level? There may be legitimate reasons for discrepancies in research findings other than simply fraud or bad science, which is what guidelines like this imply. And why are we discussing this as systemic bias when it is a publication bias issue? To me, systemic bias is exactly what we're doing when we undervalue or overvalue research or ideas from one area of the world, like we're doing here. LesVegas (talk) 13:30, 23 May 2015 (UTC)
No support for using specifics in MEDRS per SandyGeorgia. Sources are reliable or not per the specific content they support. Simple.(Littleolive oil (talk) 15:01, 23 May 2015 (UTC))
  • Blinding ourselves to source bias because it's impolite isn't the answer. There's a noted publication and results bias favouring TCM from Chinese sources, which means that we need to tread carefully when using them. Similarly, we know that there's pressure on Indian researchers to find ayurveda effective, although I'm unaware of sources substantiating that effect with methodological rigor equivalent to what we find in the studies of Chinese sources. The purpose of MEDRS is to provide guidance as to when a source is reliable, and studies of mythology-based medical systems from countries where that particular mythology is widespread don't have the same weight as when performed by outsiders.—Kww(talk) 19:19, 24 May 2015 (UTC)
I don't see anyone suggesting politeness is an issue. Perhaps I've missed that. There is a fundamental point that concerns all RS: sources are viewed per the content they are designed to support, and content is reliable per the sources chosen to support it. Generalizations about sources that cover entire countries are dangerous and have the potential for misuse. (Littleolive oil (talk) 19:38, 24 May 2015 (UTC))
Certainly has the potential for misuse, but shouldn't have any long-term effects. If TCM or ayurveda produces any results that will survive objective examination, researchers from other countries will eventually report them.—Kww(talk) 21:03, 24 May 2015 (UTC)
I generally like the most recent (#4) proposal in concept, except I agree with Sandy in that I'm iffy about putting specific cases like this into MEDRS. One thought I had: since specific cases may not fit on the main guideline page, how about making a FAQ for the talk page here to highlight specific cases/questions for easier reference? We don't absolutely need something in MEDRS itself, but just showing that there's consensus for a general course of action (be cautious about Chinese sources in this topic) seems like a good fit for a small bulleted list where we could include other commonly discussed specific topics. Thoughts? Kingofaces43 (talk) 22:34, 24 May 2015 (UTC)
That's a good idea. I think that approach is the best compromise. bobrayner (talk) 12:13, 25 May 2015 (UTC)
There's already a /FAQ. I'll add a few comments, and you all can revise them. WhatamIdoing (talk) 04:07, 1 June 2015 (UTC)
That makes it easier since it already exists. :-) I already started drafting something, so I'll paste in some of the stuff I have and will leave the incomplete parts in hidden comments. Sunrise (talk) 09:12, 3 June 2015 (UTC)