Preregistration is the practice of registering a scientific study before it is conducted. The preregistration of studies can help to prevent publication bias, reduce data dredging, and identify otherwise undisclosed HARKing. It has gained prominence as a potential solution to some of the issues that are thought to underlie the replication crisis in science.
In the standard preregistration format, researchers prepare a research protocol document prior to conducting their research. Ideally, this document indicates the research hypotheses, sampling procedure, sample size, research design, testing conditions, stimuli, measures, data coding and aggregation method, criteria for data exclusions, and statistical analyses, including potential variations on those analyses. This preregistration document is then posted on a publicly available website such as the Open Science Framework or AsPredicted. The preregistered study is then conducted, and a report of the study and its results is submitted for publication together with access to the (anonymised) preregistration document. This preregistration approach allows peer reviewers and subsequent readers to cross-reference the preregistration document with the published research article in order to identify (a) any “exploratory” tests that were not included in the preregistration document and (b) any suppressed tests that were included in the preregistered protocol but excluded from the final research report.
The registered report format requires authors to submit a description of the study methods and analyses prior to data collection. Once the method and analysis plan is vetted through Stage 1 peer review, publication of the findings is provisionally guaranteed. The associated study is then conducted, and the research report is submitted to Stage 2 peer review. Stage 2 peer review confirms that the actual research methods are consistent with the preregistered protocol and that quality thresholds are met (e.g., manipulation checks confirm the validity of the experimental manipulation). Studies that pass Stage 2 peer review are then published regardless of whether the results are confirming or disconfirming, significant or nonsignificant.
Hence, both preregistration and registered reports involve creating a time-stamped non-modifiable public record of the study and analysis plan before the data is collected. However, the study and analysis plan is only subjected to a formal peer review before data collection in the case of registered reports.
Preregistration can be used in relation to a variety of different research designs, including:
- Qualitative research (Haven & Van Grootel, 2019)
- Preexisting data (Mertens & Krypotos, 2019)
- Single case designs (Johnson & Cook, 2019)
- Exploratory research (Dirnagl, 2020)
Clinical trial registration
Clinical trial registration is the practice of documenting clinical trials before they are performed in a clinical trials registry so as to combat publication bias and selective reporting. Registration of clinical trials is required in some countries and is increasingly being standardized. Some top medical journals will only publish the results of trials that have been preregistered.
A clinical trials registry is a platform that catalogs registered clinical trials. ClinicalTrials.gov, run by the United States National Library of Medicine (NLM), was the first online registry for clinical trials and remains the largest and most widely used. In addition to combating bias, clinical trial registries serve to increase transparency and access to clinical trials for the public. Clinical trials registries are often searchable (e.g., by disease/indication, drug, or location). Trials are registered by the pharmaceutical, biotech, or medical device company sponsoring the study (the sponsor), by the hospital or foundation sponsoring it, or by another organization, such as a contract research organization (CRO), that is running the study.
There has been a push from governments and international organizations, especially since 2005, to make clinical trial information more widely available and to standardize registries and processes of registering. The World Health Organization is working toward "achieving consensus on both the minimal and the optimal operating standards for trial registration".
Creation and development
For many years, scientists and others have worried about reporting biases: negative or null results from initiated clinical trials may be less likely to be published than positive results, skewing the literature and our understanding of how well interventions work. This worry is international and has been written about for over 50 years. One proposal to address this potential bias was a comprehensive register of initiated clinical trials that would inform the public which trials had been started. The ethical issues seemed to interest the public most: trialists (including those with potential commercial gain) benefited from those who enrolled in trials but were not required to "give back" by telling the public what they had learned.
Systematic reviewers, those who summarize what is known from clinical trials, were particularly concerned by this double standard. If the literature is skewed, then the results of a systematic review are also likely to be skewed, possibly favoring the test intervention even though the accumulated data, were they all made public, would not support that conclusion.
ClinicalTrials.gov was originally developed largely as a result of breast cancer consumer lobbying, which led to authorizing language in the FDA Modernization Act of 1997 (Food and Drug Administration Modernization Act of 1997. Pub L No. 105-115, §113 Stat 2296), but the law provided neither funding nor a mechanism of enforcement. In addition, the law required that ClinicalTrials.gov only include trials of serious and life-threatening diseases.
Then, two events occurred in 2004 that increased public awareness of the problems of reporting bias. First, the then-New York State Attorney General Eliot Spitzer sued GlaxoSmithKline (GSK) because they had failed to reveal results from trials showing that certain antidepressants might be harmful.
Shortly thereafter, the International Committee of Medical Journal Editors (ICMJE) announced that their journals would not publish reports of trials unless they had been registered. The ICMJE action was probably the most important motivator for trial registration, as investigators wanted to preserve the option of publishing their results in prestigious journals.
In 2007, the Food and Drug Administration Amendments Act of 2007 (FDAAA) clarified the requirements for registration and also set penalties for non-compliance (Public Law 110-85).
The International Committee of Medical Journal Editors (ICMJE) decided that from July 1, 2005, no trials would be considered for publication unless they were included in a clinical trials registry. The World Health Organization began its push for clinical trial registration with the initiation of the International Clinical Trials Registry Platform. There has also been action from the pharmaceutical industry, which released plans to make clinical trial data more transparent and publicly available. The revised Declaration of Helsinki, released in October 2008, states that "Every clinical trial must be registered in a publicly accessible database before recruitment of the first subject."
The World Health Organization maintains an international registry portal at http://apps.who.int/trialsearch/. WHO states that the international registry's mission is "to ensure that a complete view of research is accessible to all those involved in health care decision making. This will improve research transparency and will ultimately strengthen the validity and value of the scientific evidence base."
Since 2007, the International Committee of Medical Journal Editors (ICMJE) has accepted registration in any primary registry in the WHO network, in addition to ClinicalTrials.gov. Registration in registries other than ClinicalTrials.gov has increased across study designs since 2014.
Overview of clinical trial registries
Worldwide, there is a growing number of registries. A 2013 study identified the top five registries, including the following (numbers updated as of August 2013):
- 3. Japan registries network (JPRN): 12,728
- 5. Australia and New Zealand (ANZCTR): 8,216
Over 200 journals offer a registered reports option (Centre for Open Science, 2019), and the number of journals that are adopting registered reports is approximately doubling each year (Chambers et al., 2019).
Psychological Science has encouraged the preregistration of studies and the reporting of effect sizes and confidence intervals. The editor-in-chief also noted that the editorial staff will ask for replication of studies with surprising findings obtained from small samples before allowing the manuscripts to be published.
Nature Human Behaviour has adopted the registered report format, as it “shift[s] the emphasis from the results of research to the questions that guide the research and the methods used to answer them”.
European Journal of Personality defines this format: “In a registered report, authors create a study proposal that includes theoretical and empirical background, research questions/hypotheses, and pilot data (if available). Upon submission, this proposal will then be reviewed prior to data collection, and if accepted, the paper resulting from this peer-reviewed procedure will be published, regardless of the study outcomes.”
Note that only a very small proportion of academic journals in psychology and neuroscience explicitly state in their aims and scope or instructions to authors that they welcome submissions of replication studies. This does little to encourage the reporting of, or even attempts at, replication studies.
Several articles have outlined the rationale for preregistration (e.g., Lakens, 2019; Nosek et al., 2018; Wagenmakers et al., 2012). As Rubin (2020, Table 1) summarized, preregistration helps to identify the following issues:
- Poorly planned hypotheses and tests
- HARKing: undisclosed hypothesizing after the results are known
- The suppression of a priori hypotheses that yield null or disconfirming results
- Deviations from planned analyses
- Lack of clarity between confirmatory and exploratory analyses
- Undisclosed multiple testing
- Forking paths, in which researchers make decisions about which tests to conduct based on information from their sample
- p-hacking: continuing data analysis until a significant p value is obtained
- Optional stopping: repeating the same test at different stages of data collection until a significant result is obtained
- Invalid use of p values, because p values lose their meaning in exploratory analyses
- Researchers’ biases, including the confirmation bias and hindsight bias
- Selective reporting of results: “cherry-picking” specific supportive results and suppressing non-supportive results
- Unclear test severity, preventing the identification of hypotheses that have a low probability of being confirmed when they are false
- Unreported null findings
- Publication bias: unpublished null findings, resulting in the file drawer problem
- Potentially low replicability, ostensibly due to the use of questionable research practices (e.g., HARKing, p-hacking, optional stopping)
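Two of the practices listed above, p-hacking and optional stopping, can be made concrete with a short simulation. The sketch below is a hypothetical illustration (not drawn from any cited study): it tests a true null hypothesis, peeks at the p value after every batch of new observations, and stops as soon as significance is reached. Across many simulated studies, the false positive rate ends up well above the nominal 5%.

```python
import math
import random
import statistics

def one_sample_p(xs):
    """Two-sided one-sample test of mean 0, using a normal
    approximation to the t distribution (adequate for illustration)."""
    t = statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(len(xs)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def optional_stopping_study(rng, start_n=20, step=10, max_n=100, alpha=0.05):
    """Simulate one study under a true null: collect start_n observations,
    then keep adding `step` more and re-testing until the result is
    'significant' or max_n is reached. Returns True on a false positive."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(start_n)]  # true effect is zero
    while True:
        if one_sample_p(xs) < alpha:
            return True                 # stop early and "report" the effect
        if len(xs) >= max_n:
            return False
        xs.extend(rng.gauss(0.0, 1.0) for _ in range(step))

rng = random.Random(1)
n_studies = 2000
false_positive_rate = sum(
    optional_stopping_study(rng) for _ in range(n_studies)
) / n_studies
# A single fixed-n test would yield roughly 0.05; repeated peeking
# inflates the rate well above that.
print(false_positive_rate)
```

Preregistering the sample size (or a sequential design with corrected significance thresholds) removes this degree of freedom, which is why deviations from a preregistered sample size are treated as consequential.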
Identifying issues such as these via preregistration helps to improve "the interpretability and credibility of research findings" (Nosek et al., 2018, p. 2605). However, Rubin (2020) argued that only some of these issues are problematic and only under some conditions. He also argued that, when they are problematic, preregistration is not necessary to identify these issues. Instead, they can be identified via (a) clear rationales for current hypotheses and analytical approaches, (b) public access to research data, materials, and code, and (c) demonstrations of the robustness of research conclusions to alternative interpretations and analytical approaches.
Some proponents of preregistration argue that it is "a method to increase the credibility of published results" (Nosek & Lakens, 2014), that it "makes your science better by increasing the credibility of your results" (Centre for Open Science), and that it "improves the interpretability and credibility of research findings" (Nosek et al., 2018, p. 2605). Others make the more modest claim that preregistration improves only the interpretability of research results and not necessarily their credibility (e.g., Lakens, 2019). Either way, critics have argued that preregistration may deter researchers from engaging in unplanned exploratory analyses (Coffman & Niederle, 2015), because it devalues exploratory analyses as being less "credible" and/or "interpretable" than preregistered confirmatory analyses. In response, preregistration advocates have pointed out that exploratory analyses are permitted and that they retain some value vis-a-vis hypothesis generation. Preregistration merely makes the distinction between confirmatory and exploratory analyses clearer (Nosek et al., 2018; Nosek & Lakens, 2014; Wagenmakers et al., 2012). Hence, preregistration is “a plan, not a prison” (Dehaven, 2017). However, critics counterargue that, if preregistration is merely a plan, and not a prison, then researchers should feel free to deviate from that plan and undertake non-preregistered exploratory hypothesis tests without fearing potential accusations of inappropriate research practices such as p-hacking and inflated familywise error rates (e.g., Navarro, 2020). They have further argued that preregistration is not the only way to identify inappropriate research practices. For example, p-hacking can be identified if researchers provide (a) publicly available data and research materials and (b) robustness or multiverse analyses (Rubin, 2020; Steegen et al., 2016; for several other approaches, see Srivastava, 2018). 
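One such robustness check, a multiverse analysis, runs the same hypothesis test under every defensible combination of analytic choices and reports the full set of results rather than a single, possibly cherry-picked one. The following sketch is a hypothetical illustration; the dataset, cutoffs, and transforms are invented for the example and are not taken from Steegen et al. (2016).

```python
import itertools
import math
import random
import statistics

# Hypothetical dataset: reaction times for two groups, with a small true
# group difference built in (all names and numbers are illustrative).
rng = random.Random(0)
data = [{"rt": rng.gauss(500 + 20 * g, 100), "group": g}
        for g in (0, 1) for _ in range(50)]

def welch_p(a, b):
    """Two-sided Welch-style test via a normal approximation (illustrative)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# The "multiverse": every combination of two defensible analytic choices.
cutoffs = [700, 800, None]                   # outlier exclusion thresholds
transforms = {"raw": lambda x: x, "log": math.log}

results = {}
for cutoff, (name, f) in itertools.product(cutoffs, transforms.items()):
    rows = [d for d in data if cutoff is None or d["rt"] < cutoff]
    a = [f(d["rt"]) for d in rows if d["group"] == 0]
    b = [f(d["rt"]) for d in rows if d["group"] == 1]
    results[(cutoff, name)] = welch_p(a, b)

# If the conclusion holds across all cells of the multiverse it is robust;
# if it holds only under one specific combination, that is a warning sign.
for spec, p in sorted(results.items(), key=lambda kv: str(kv[0])):
    print(spec, round(p, 4))
```

Reporting all six p values at once makes it visible whether a "significant" result depends on one particular exclusion rule or transform, which is the kind of transparency critics argue can substitute for, or complement, preregistration.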
Finally, critics have argued that the key distinction between confirmatory and exploratory analyses is unclear and confusing (Devezer et al., 2020; Rubin, 2020; Szollosi & Donkin, 2019), and that concerns about inflated familywise error rates are unjustified when those error rates refer to abstract, atheoretical studywise hypotheses (Rubin, 2020; Szollosi et al., 2020).
There is also a concern that researchers rarely follow the exact research methods and analyses that they preregister. For example, a survey of 27 preregistered studies found that researchers deviated from their preregistered plans in every case (Claesen et al., 2019). The most frequent deviations concerned the planned sample size, exclusion criteria, and statistical model. Hence, what were intended as preregistered confirmatory tests ended up as unplanned exploratory tests. Again, preregistration advocates argue that deviations from preregistered plans are acceptable as long as they are reported transparently and justified. But critics argue that it is more important to justify the current research approach relative to best practice in the area than to justify deviations from a preregistered approach that may not reflect best practice. As Rubin (2020) explained, "we should be more interested in the rationale for the current method and analyses than in the rationale for historical changes that have led up to the current method and analyses" (pp. 378-379).
Finally, some commentators have argued that, under some circumstances, preregistration may actually harm science by providing a false sense of credibility to research studies (McPhetres, 2020; Szollosi et al., 2020). Consistent with this view, there is some evidence that researchers view registered reports as more credible than standard reports on a range of dimensions (Soderberg et al., 2020; see also Field et al., 2020 for inconclusive evidence), although it is unclear whether this represents a "false" sense of credibility arising from pre-existing positive community attitudes about preregistration or a genuine causal effect of registered reports on research quality.
- "Registered Replication Reports". Association for Psychological Science. Retrieved 2015-11-13.
- Chambers, Chris. "Psychology's 'registration revolution'". the Guardian. Retrieved 2015-11-13.
- Christie Aschwanden (6 December 2018). "Psychology's Replication Crisis Has Made The Field Better". FiveThirtyEight.
- Haven, T. L.; Van Grootel, D. L. (2019). "Preregistering qualitative research". Accountability in Research. 26 (3): 229–244. doi:10.1080/08989621.2019.1580147. PMID 30741570.
- Mertens, G.; Krypotos, A. M. (2019). "Preregistration of analyses of preexisting data". Psychologica Belgica. 59 (1): 338–352. doi:10.5334/pb.493. PMID 31497308. S2CID 201844047.
- Johnson, A. H.; Cook, B. G. (2019). "Preregistration in single-case design research". Exceptional Children. 86 (1): 95–112. doi:10.1177/0014402919868529. S2CID 204363608.
- Dirnagl, U. (2020). "Preregistration of exploratory research: Learning from the golden age of discovery". PLoS Biol. 18 (3): e3000690. doi:10.1371/journal.pbio.3000690.
- "International Clinical Trials Registry Platform (ICTRP)". Who.int. Retrieved 2017-06-23.
- "WHO | Working Group on Best Practice for Clinical Trials Registers (BPG)". Who.int. Retrieved 2017-06-23.
- Barrett, Stephen. "Major Journals Press for Clinical Trial Registration". www.quackwatch.org. Retrieved 22 May 2019.
- "WHO - Working Group on Best Practice for Clinical Trials Registers (BPG)". www.who.int.
- Dickersin, K; Rennie, D (2003). "Registering clinical trials". JAMA. 290 (4): 516–523. doi:10.1001/jama.290.4.516. PMID 12876095. S2CID 10184671.
- Sterling, TD (1959). "Publication decisions and their possible effects on inferences drawn from tests of significances – or vice versa". J Am Stat Assoc. 54 (285): 30–34. doi:10.1080/01621459.1959.10501497. JSTOR 2282137.
- International Collaborative Group on Clinical Trial Registries (1993). "Position paper and consensus recommendations on clinical trial registries. Ad Hoc Working Party of the International Collaborative Group on Clinical Trials Registries". Clin Trials Metaanal. 28 (4–5): 255–266. PMID 10146333.
- Dickersin, K; Rennie, D (2012). "The evolution of trial registries and their use to assess the clinical trial enterprise". JAMA. 307 (17): 1861–4. doi:10.1001/jama.2012.4230. PMID 22550202.
- SANCTR. "SANCTR > Home". www.sanctr.gov.za.
- "Archived copy". Archived from the original on 2010-07-06. Retrieved 2010-07-23.
- "Archived copy". Archived from the original on 2011-08-30. Retrieved 2010-09-02.
- "ANZCTR". www.anzctr.org.au.
- Gülmezoglu, AM; Pang, T; Horton, R; Dickersin, K (2005). "WHO facilitates international collaboration in setting standards for clinical trial registration". Lancet. 365 (9474): 1829–1831. doi:10.1016/s0140-6736(05)66589-0. PMID 15924966. S2CID 29203085.
- "International Clinical Trials Registry Platform (ICTRP)". World Health Organization.
- Banno, M; Tsujimoto, Y; Kataoka, Y (2019). "Studies registered in non-ClinicalTrials.gov accounted for an increasing proportion of protocol registrations in medical research". Journal of Clinical Epidemiology. 116: 106–113. doi:10.1016/j.jclinepi.2019.09.005. PMID 31521723.
- Anderson, Monique L.; Chiswell, Karen; Peterson, Eric D.; Tasneem, Asba; Topping, James; Califf, Robert M. (12 March 2015). "Compliance with Results Reporting at ClinicalTrials.gov". New England Journal of Medicine. 372 (11): 1031–1039. doi:10.1056/NEJMsa1409364. PMC 4508873. PMID 25760355.
- DeVito, Nicholas J; Bacon, Seb; Goldacre, Ben (February 2020). "Compliance with legal requirement to report clinical trial results on ClinicalTrials.gov: a cohort study". The Lancet. 395 (10221): 361–369. doi:10.1016/S0140-6736(19)33220-9. PMID 31958402. S2CID 210704225.
- Pullar, T; Kumar, S; Feely, M (October 1989). "Compliance in clinical trials". Annals of the Rheumatic Diseases. 48 (10): 871–5. doi:10.1136/ard.48.10.871. PMC 1003898. PMID 2684057.
- Miller, Jennifer E; Korn, David; Ross, Joseph S (12 November 2015). "Clinical trial registration, reporting, publication and FDAAA compliance: a cross-sectional analysis and ranking of new drugs approved by the FDA in 2012". BMJ Open. 5 (11): e009758. doi:10.1136/bmjopen-2015-009758. PMC 4654354. PMID 26563214.
- Miseta, Ed (9 January 2018). "As ClinicalTrialsgov Turns 10 Will We See Compliance Improve". www.clinicalleader.com.
- Huser, V.; Cimino, J. J. (2013). "Evaluating adherence to the International Committee of Medical Journal Editors' policy of mandatory, timely clinical trial registration". Journal of the American Medical Informatics Association. 20 (e1): e169–74. doi:10.1136/amiajnl-2012-001501. PMC 3715364. PMID 23396544.
- Centre for Open Science. "Registered Reports: Peer review before results are known to align scientific values and practices".
- Chambers, C. D.; Forstmann, B.; Pruszynski, J. A. (2019). "Science in flux: Registered Reports and beyond at the European Journal of Neuroscience". European Journal of Neuroscience. 49 (1): 4–5. doi:10.1111/ejn.14319. PMID 30584679. S2CID 58645509.
- Lindsay, D. Stephen (2015-11-09). "Replication in Psychological Science". Psychological Science. 26 (12): 1827–32. doi:10.1177/0956797615616374. ISSN 0956-7976. PMID 26553013.
- Mellor, D. (2017). "Promoting reproducibility with registered reports". Nature Human Behaviour. 1: 0034. doi:10.1038/s41562-016-0034. S2CID 28976450.
- Yeung, Andy W. K. (2017). "Do Neuroscience Journals Accept Replications? A Survey of Literature". Frontiers in Human Neuroscience. 11: 468. doi:10.3389/fnhum.2017.00468. ISSN 1662-5161. PMC 5611708. PMID 28979201.
- Martin, G. N.; Clarke, Richard M. (2017). "Are Psychology Journals Anti-replication? A Snapshot of Editorial Practices". Frontiers in Psychology. 8: 523. doi:10.3389/fpsyg.2017.00523. ISSN 1664-1078. PMC 5387793. PMID 28443044.
- "Registered Reports Overview". Center for Open Science. Retrieved 2018-11-28.
- Wagenmakers, E. J.; Wetzels, R.; Borsboom, D.; van der Maas, H. L.; Kievit, R. A. (2012). "An agenda for purely confirmatory research". Perspectives on Psychological Science. 7 (6): 632–638. doi:10.1177/1745691612463078. PMID 26168122. S2CID 5096417.
- Lakens, D. (2019). "The value of preregistration for psychological science: A conceptual analysis" (PDF). Japanese Psychological Review. 62 (3): 221–230.
- Nosek, B. A.; Ebersole, C. R.; DeHaven, A. C.; Mellor, D. T. (2018). "The preregistration revolution". Proceedings of the National Academy of Sciences. 115 (11): 2600–2606. doi:10.1073/pnas.1708274114. PMID 29531091. S2CID 4639380.
- Rubin, M. (2020). "Does preregistration improve the credibility of research findings?". The Quantitative Methods for Psychology. 16 (4): 376–390. doi:10.20982/tqmp.16.4.p376. S2CID 221821323.
- Nosek, B. A.; Lakens, D. (2014). "Registered reports: A method to increase the credibility of published results". Social Psychology. 45 (3): 137–141. doi:10.1027/1864-9335/a000192.
- Coffman, L. C.; Niederle, M. (2015). "Pre-analysis plans have limited upside, especially where replications are feasible". Journal of Economic Perspectives. 29 (3): 81-98. doi:10.1257/jep.29.3.81.
- Dehaven, A. "Preregistration: A plan, not a prison". Centre for Open Science. Retrieved 25 September 2020.
- Steegen, S.; Tuerlinckx, F.; Gelman, A.; Vanpaemel, W. (2016). "Increasing transparency through a multiverse analysis". Perspectives on Psychological Science. 11 (5): 702–712. doi:10.1177/1745691616658637.
- Srivastava, S. (2018). "Sound inference in complicated research: A multi-strategy approach". PsyArXiv. doi:10.31234/osf.io/bwr48.
- Devezer, B.; Navarro, D. J.; Vandekerckhove, J.; Buzbas, E. O. (2020). "The case for formal methodology in scientific reform". doi:10.1101/2020.04.26.048306. S2CID 218466913.
- Szollosi, A.; Donkin, C. (2019). "Arrested theory development: The misguided distinction between exploratory and confirmatory research". doi:10.31234/osf.io/suzej.
- Szollosi, A.; Kellen, D.; Navarro, D. J.; Shiffrin, R.; van Rooji, I.; Van Zandt, T.; Donkin, C. (2020). "Is preregistration worthwhile?". Trends in Cognitive Sciences. 24 (2): 94–95. doi:10.1016/j.tics.2019.11.009. S2CID 209500379.
- Claesen, A.; Gomes, S.; Tuerlinckx, F.; Vanpaemel, W.; Leuven, K. U. (2019). "Preregistration: Comparing dream to reality". doi:10.31234/osf.io/d8wex.
- McPhetres, J. (2020). "What should a preregistration contain?". doi:10.31234/osf.io/cj5mh.
- Field, S. M.; Wagenmakers, E. J.; Kiers, H. A.; Hoekstra, R.; Ernst, A. F.; van Ravenzwaaij, D. (2020). "The effect of preregistration on trust in empirical research findings: Results of a registered report". Royal Society Open Science. 7 (4): 181351. doi:10.1098/rsos.181351.
- Soderberg, C. K.; Errington, T. M.; Schiavone, S. R.; Bottesini, J.; Singleton Thorn, F.; Vazire, S.; Esterling, K. M.; Nosek, B. A. (2020). "Research quality of registered reports compared to the standard publishing model". doi:10.31222/osf.io/7x9vy.