Talk:Randomized controlled trial
This is the talk page for discussing improvements to the Randomized controlled trial article.
This is not a forum for general discussion of the article's subject.
Randomized controlled trial has been listed as a level-4 vital article in Mathematics. If you can improve it, please do. This article has been rated as B-Class.
This article is of interest to the following WikiProjects:
- 1 What do we do about the inherent limitations? Recommendations?
- 2 Why
- 3 control(led)
- 4 Duplication
- 5 Content inaccurate?
- 6 How random is random, and does it matter?
- 7 Clusters and correlation as a difficulty
- 8 Sample size calculation
- 9 Opening paragraph
- 10 Randomized clinical trials
- 11 An RCT or a RCT?
- 12 Placebo versus Wait List - Serious Error on Page
- 13 Dr. Peters's comment on this article
What do we do about the inherent limitations? Recommendations?
Kudos on the limitations section, it is an accurate and concise enumeration of the consensus issues. Unfortunately, the sections on Randomization and Limitations are disjointed.
It is probably easier to focus on scientific limitations and not conflict of interest biases. To some extent, overcoming the limitations is about effective study design, which is probably too much to summarize here. Nonetheless, the authors of this page have hinted at suggestions with a very well written section on "Randomization".
Given our understanding of its limitations, isn't it time that we discourage references to RCT as a "gold standard?" It is a very fine tool, among many. It provides the best answer to a very specific question. But to get the right answer, we must ask the right question, and for many significant clinical (as well as social science) questions, RCT may not be the best tool at all. — Preceding unsigned comment added by 126.96.36.199 (talk) 14:59, 13 August 2013 (UTC)
Why are there empty sub-headings? Guardian 00:15, 14 July 2006 (UTC)
Which to use in "RCT", "controlled" vs "control", is debatable. The number of Google hits is 10x larger for the former, so we should name it first, as it is more widely accepted and understood. —The preceding unsigned comment was added by Mebden (talk • contribs) 10:16, 10 January 2007 (UTC).
- Not so fast!
- First, there are a number of inherent biases in what each particular search engine spits out (or claims to spit out). Not only that, but as Wikipedia is (perhaps unduly) influential in search engine hits, it can create a circular argument.
- Second, a "control test" is readily recognised as a phrase in which the word "control" acts as a noun adjunct (what used to be known as an 'adjectival noun'). A "randomised control test" is also a grammatically correct rendition of a "control test" involving randomisation.
- —DIV (188.8.131.52 (talk) 05:37, 4 March 2014 (UTC))
Just noticed that the whole section under Urn randomisation is an exact copy of the section before. DanHertogs 15:22, 19 April 2007 (UTC)
- It looks like it happened at this edit. I'll undo it. Good catch. Burlywood 18:42, 19 April 2007 (UTC)
Challenging content: in the intro paragraphs, I don't believe randomisation ensures equal allocation; you can still have unequal allocation of confounding factors if you are unlucky. Do others agree? (I have made no edits.)
- For any fixed covariate, the law of large numbers suggests that random assignment approximately balances that covariate across arms, and large-deviation arguments suggest that the probability of great imbalance is very small. With many covariates (possibly with complicated dependencies), it is hard to say, but it is certainly plausible that randomization will fail to balance some covariates (say, of 40 or more covariates, each of which some researcher may suspect of being confounding). In fact, there is substantial concern about covariate imbalance: see recent issues of Statistical Science, e.g. by Rosenberger or Paul Rosenbaum, etc.
- I changed the introduction, to avoid the problem indicated by the previous editor (while still keeping the main message, that randomization is a good thing)! Kiefer.Wolfowitz (talk) 22:41, 27 June 2009 (UTC)
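The law-of-large-numbers point discussed above can be illustrated with a small simulation. This is only a sketch (the function name, the 0.5 covariate prevalence, and the trial counts are my own illustrative choices, not anything from the article): it estimates how far apart two randomly assigned arms end up on a single binary covariate, and shows the expected gap shrinking as the trial grows.

```python
import random
import statistics

def covariate_imbalance(n, trials=1000, seed=0):
    """Mean absolute difference in covariate prevalence between two
    randomly assigned arms of n/2 subjects each, averaged over many
    simulated trials."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        # Each subject carries a binary covariate (e.g. a risk factor)
        # with prevalence 0.5 in the recruited population.
        covariate = [rng.random() < 0.5 for _ in range(n)]
        idx = list(range(n))
        rng.shuffle(idx)  # simple (complete) randomization into two arms
        arm_a = [covariate[i] for i in idx[: n // 2]]
        arm_b = [covariate[i] for i in idx[n // 2 :]]
        diffs.append(abs(statistics.mean(arm_a) - statistics.mean(arm_b)))
    return statistics.mean(diffs)

# The expected imbalance shrinks roughly like 1/sqrt(n) as the trial grows,
# which is the law-of-large-numbers argument; for a small trial, sizeable
# imbalance on any one covariate remains quite plausible.
for n in (20, 200, 2000):
    print(n, round(covariate_imbalance(n), 3))
```

With many covariates, the chance that at least one ends up noticeably imbalanced grows, which is exactly the multiple-covariate concern raised above.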
How random is random, and does it matter?
Should there be a section discussing HOW random the random allocations are? There are big statistical differences between sampling a stochastic process, using a pseudorandom number generator with adequate cycle length, and calling RAND() in Excel ... And what impact on Phase I, II or III trials might these differences have? Is it standard practice for study designers to describe randomization techniques? Daen (talk) 14:17, 19 November 2008 (UTC)
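On the question of how the allocation is generated: one technique commonly described in trial protocols is permuted-block randomization. The sketch below is illustrative only (the function name and arm labels are my own, not from any trial software); it keeps the arms balanced throughout recruitment and draws its randomness from the operating system's entropy source rather than a default seeded PRNG like Excel's RAND().

```python
import secrets

def permuted_block_schedule(n_subjects, block_size=4):
    """Permuted-block randomization: within each block, exactly half the
    subjects go to each arm, so the allocation stays balanced throughout
    recruitment rather than only in expectation."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    schedule = []
    while len(schedule) < n_subjects:
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        # Fisher-Yates shuffle driven by a cryptographic RNG (secrets),
        # so the sequence is not reproducible from a guessable seed.
        for i in range(len(block) - 1, 0, -1):
            j = secrets.randbelow(i + 1)
            block[i], block[j] = block[j], block[i]
        schedule.extend(block)
    return schedule[:n_subjects]

print(permuted_block_schedule(10))
```

A schedule like this answers part of the question above: the statistical quality of the generator matters less for validity than for predictability, since a clinician who can guess the next allocation can subvert concealment.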
Clusters and correlation as a difficulty
This article should mention the difficulties associated with having to randomise clusters of individuals. An example is when a single intervention must be used for all subjects at a particular location for some reason of practicality, e.g. interventions are methods of care which are randomised to clinics or hospitals, therefore patients are clustered. I've just created cluster randomised controlled trial to delve into that topic more deeply, as time permits. Tayste (edits) 22:56, 21 February 2010 (UTC)
This is actually a challenging topic. I suggest referencing the "status quo" methods with a real-world example for these complex topics (clusters and correlation). ANOVA is often used to ask such questions: what is the measured variance within a group versus the measured variance between groups? Even with these groupwise statistical tests, the problem is at least as hard as finding a way to "measure" the similarity between a single patient sample observation and the "expected" observation. The more criteria we add, the more we run into multiple-hypothesis-testing problems and Type I errors. In a nutshell: even the simple problem is hard, and it becomes much harder when you consider the vast number of ways to "measure" the "distance" between two patient "samples".
Correlation is a very broad topic. Are you trying to correlate (and thereby cluster) patient samples, study features, or whole populations? I have had to review these issues as part of an informatics doctoral thesis. The more complex the method, the less likely it is to be adopted in an RCT (or clinical practice).
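The within-group versus between-group comparison mentioned above is exactly what the one-way ANOVA F statistic computes. A minimal stdlib-only sketch (the function name is my own, not a standard API), e.g. for outcomes clustered by clinic:

```python
import statistics

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: mean-square variance between groups
    divided by mean-square variance within groups."""
    k = len(groups)                          # number of groups (clinics)
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: how far each group mean sits from
    # the grand mean, weighted by group size.
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: spread of observations around their
    # own group mean.
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two clinics with clearly separated outcomes give a large F;
# heavily overlapping clinics give a small one.
print(one_way_anova_F([[1, 2, 3], [7, 8, 9]]))   # well separated
print(one_way_anova_F([[1, 2, 3], [2, 3, 4]]))   # overlapping
```

In a cluster-randomised trial the within-cluster correlation inflates the effective variance, which is why naive per-patient analysis of clustered data overstates significance.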
Sample size calculation
Should the article have a section on how to estimate the sample size that would be necessary in order to detect an effect of a given size? Or is that covered somewhere else in WP? Tayste (edits) 23:14, 21 February 2010 (UTC)
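For what such a section might contain: the standard normal-approximation formula for comparing two means gives a per-arm sample size of n = 2(z_{1-α/2} + z_{1-β})² σ² / Δ². A sketch using only the Python standard library (the function name is illustrative, not a standard API):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sample comparison of
    means: n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma/delta)^2,
    where delta is the smallest effect worth detecting and sigma the
    outcome's standard deviation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# Detecting a half-standard-deviation effect at 5% significance
# with 80% power:
print(n_per_arm(delta=0.5, sigma=1.0))
```

The formula makes the trade-offs visible: halving the detectable effect quadruples the required sample, which is why underpowered trials are such a recurring problem.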
Opening paragraph
- I've made a few further revisions to conserve some key epidemiological concepts, such as the experimental character of this study design. On a separate point, I feel many of the distinctions made in the last paragraph of the lead (i.e. "The terms "RCT" and randomized trial are often used synonymously, but some authors distinguish between "RCTs" which compare treatment groups with control groups not receiving treatment (as in a placebo-controlled study), and "randomized trials" which can compare multiple treatment groups with each other. RCTs are sometimes known as randomized control trials. RCTs are also called randomized clinical trials or randomized controlled clinical trials when they concern clinical research;) are largely dated or WP:UNDUE (e.g. ref. 3). Consensus to reframe this paragraph and move some of the referenced material out of the lead? —MistyMorn (talk) 15:55, 6 May 2012 (UTC)
- As I said on your talk page, I appreciate your edits on this article. I think it reads much better now while retaining the key points. As far as reframing the paragraph goes, you'll have no opposition from me; you're clearly competent with these matters. Regards, Tkenna (talk) 16:24, 6 May 2012 (UTC)
This is one of the most important medical articles in Wikipedia. People can't understand medicine without understanding randomized, controlled trials. If people can't understand the introduction, they'll never get through the rest of the entry.
And yet this reads like an academic paper, written for people who already know what a randomized controlled trial is, written to show off how many polysyllabic medical terms the writer knows. It defines "randomized controlled trial" with the term "clinical trial." If a reader doesn't know what a randomized controlled trial is, they're not likely to know what a clinical trial is either. Before you use a term like "clinical trial," you have to explain what it is. In fact, most readers who don't know what a randomized controlled trial is won't know what a "scientific experiment" is either. This introduction has to be completely rewritten in plain English, preferably with one or more WP:RSs to support it. I would suggest seeing how professional writers, like New York Times reporters, have done it, rather than trying to create a definition out of your own head.
WP:NOTJOURNAL "Scientific journals and research papers. A Wikipedia article should not be presented on the assumption that the reader is well versed in the topic's field. Introductory language in the lead and initial sections of the article should be written in plain terms and concepts that can be understood by any literate reader of Wikipedia without any knowledge in the given field before advancing to more detailed explanations of the topic. While wikilinks should be provided for advanced terms and concepts in that field, articles should be written on the assumption that the reader will not or cannot follow these links, instead attempting to infer their meaning from the text." --Nbauman (talk) 06:31, 24 March 2014 (UTC)
Randomized clinical trials
RCT may also refer to randomized clinical trials. How can we incorporate this information? And which article would be better for incorporating it? Is this a good article to put this information in? --Abhijeet Safai (talk) 06:12, 25 November 2012 (UTC)
An RCT or a RCT?
- I vote for "an RCT" as when reading, I spell out the initials rather than the words they stand for. Maybe there's some guidance in the WP:MOS somewhere? Tayste (edits) 23:29, 30 April 2015 (UTC)
- This APA style page suggests "to use your ears (how the acronym is pronounced), not your eyes (how it's spelled)". Tayste (edits) 23:35, 30 April 2015 (UTC)
Placebo versus Wait List - Serious Error on Page
Currently this excellent article contains a serious error. It states that "groups receiving the experimental treatment are compared with control groups receiving no treatment (a placebo-controlled study) or a previously tested treatment (a positive-control study)."
A no-treatment or wait-list group is not a placebo group. A wait-list group is a group that is treated (for ethical reasons) after the wait period. By way of contrast, a placebo group is a group that receives an inert substance or treatment rather than no treatment. This engages the placebo response: the treatment effect of receiving something that elicits an expectation of benefit. The placebo effect is substantial, eliciting improvement of over 30% in most studies.
One other minor quibble: in my field (behavioral psychology), what the authors refer to as "a positive-control study" is usually called an "active treatment." Grenheldas (talk) 04:08, 6 May 2016 (UTC)Grenheldas
Dr. Peters's comment on this article
Dr. Peters has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:
The article is mostly dealing with medical RCTs, which is of course fine since most RCTs have taken place in the medical sector. Yet, in some parts, for example external validity, arguments are made that do not apply to medical studies but to social science studies. It would thus be better to structure the article in a way that makes this more obvious. Two solutions are possible: either address the differences between medical and social science RCTs at the beginning, and thus discuss social and medical RCTs side by side in each section, or split the article into two parts, with medical RCTs at the beginning and a section on social science RCTs at the end (as is done to some extent at the moment, but more clearly).
Update to section 10 (Randomized controlled trials in social science): RCTs have recently gained attention in the social sciences. In the field of economics, for example, a shift from theoretical studies to empirical work, particularly experiments, can be noted over the last few decades (Hamermesh 2013). While the method is the same as in medical research, conducting RCTs to evaluate policy measures differs from medical RCTs when it comes to implementation. Several researchers have discussed these issues, which include, for example, choosing the right level of randomization, data collection, or alternative randomization techniques (see, for example, Glennerster and Takavarasha 2013 or Duflo et al. 2008). Although RCTs have improved the internal validity of studies in the social science disciplines over the last decade by minimizing selection bias, they struggle with external validity, even in comparison to medical RCTs, since issues like general equilibrium effects do not occur in the medical setting. A recent systematic review by Peters, Langbein and Roberts (2016) analyzed 94 articles published in top economics journals between 2009 and 2014 and found that a majority of studies do not take external validity issues into account properly.
Update to section 10.2 (International development section): RCTs have been applied to a number of topics throughout the world. A prominent example is the PROGRESA evaluation in Mexico, where conditional cash transfers were found to be beneficial on a number of levels for rural families and, based on the results of the RCT, the government introduced this as a policy (studies using PROGRESA include, among others, Attanasio et al. (2012) or Gertler (2004)). Other domains with evidence from a large array of interventions in developing countries include, among others, health (for example Miguel and Kremer 2003 or Dupas 2014), the (micro-)finance sector (for example Tarozzi et al. (2014) or Karlan et al. (2014)), and education (for example Das et al. (2013) or Duflo et al. (2011, 2012)).
Update to section 10.4 (Education section): One of the first RCTs in the social sciences worldwide was the STAR experiment, which started in 1985 and was designed to determine the effect of small classes on short- and long-term pupil performance (see, for example, Chetty et al. 2011).
Literature:
Attanasio, O., Meghir, C. and Santiago, A. (2012). 'Education Choices in Mexico: Using a Structural Model and a Randomized Experiment to Evaluate PROGRESA', Review of Economic Studies, 79(1): 37-66.
Chetty, R., Friedman, J. N., Hilger, N., Saez, E., Whitmore Schanzenbach, D. and Yagan, D. (2011). 'How Does Your Kindergarten Classroom Affect Your Earnings? Evidence from Project Star', Quarterly Journal of Economics, 126(4): 1593-1660.
Das, J., Dercon, S., Habyarimana, J., Krishnan, P., Muralidharan, K. and Sundararaman, V. (2013). 'School Inputs, Household Substitution, and Test Scores', American Economic Journal: Applied Economics, 5(2): 29-57.
Duflo, E., Dupas, P. and Kremer, M. (2011). 'Peer Effects, Teacher Incentives, and the Impact of Tracking: Evidence from a Randomized Evaluation in Kenya', American Economic Review, 101(5): 1739-1774.
Duflo, E., Glennerster, R. and Kremer, M. (2008). 'Using randomization in development economics research: a toolkit', in (P. Schultz and J. Strauss, eds.), Handbook of Development Economics: 3895-3962, Amsterdam: North Holland.
Duflo, E., Hanna, R. and Ryan, S. P. (2012). 'Incentives Work: Getting Teachers to Come to School', American Economic Review, 102(4): 1241-1278.
Dupas, P. (2014). 'Short-Run Subsidies and Long-Run Adoption of New Health Products: Evidence from a Field Experiment', Econometrica, 82(1): 197-228.
Gertler, P. (2004). 'Do Conditional Cash Transfers Improve Child Health? Evidence from PROGRESA's Control Randomized Experiment', American Economic Review, 94(2): 336-341.
Glennerster, R. and Takavarasha, K. (2013). 'Running randomized evaluations - a practical guide', Princeton University Press: Princeton and Oxford.
Hamermesh, D. S. (2013). 'Six Decades of Top Economics Publishing: Who and How?', Journal of Economic Literature, 51(1): 162-172.
Karlan, D., Osei, R., Osei-Akoto, I. and Udry, C. (2014). 'Agricultural Decisions after Relaxing Credit and Risk Constraints', Quarterly Journal of Economics, 129(2): 597-652.
Miguel, E. and Kremer, M. (2003). 'Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities', Econometrica, 72(1): 159-217.
Peters, J., Langbein, J. and Roberts, G. (2016). 'Policy Evaluation, Randomized Controlled Trials, and External Validity - A Systematic Review', Economics Letters, forthcoming. Discussion paper version published as: Ruhr Economic Papers 589: RWI.
Tarozzi, A., Mahajan, A., Blackburn, B., Kopf, D., Krishnan, L. and Yoong, J. (2014). 'Micro-loans, Insecticide-Treated Bednets, and Malaria: Evidence from a Randomized Controlled Trial in Orissa, India', American Economic Review, 104(7): 1909-1941.
We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.
We believe Dr. Peters has expertise on the topic of this article, since he has published relevant scholarly research:
- Reference: Gunther Bensch & Jörg Peters, 2012. "A Recipe for Success? Randomized Free Distribution of Improved Cooking Stoves in Senegal," Ruhr Economic Papers 0325, Rheinisch-Westfälisches Institut für Wirtschaftsforschung, Ruhr-Universität Bochum, Universität Dortmund, Universität Duisburg-Essen.