Preregistration is the practice of registering the hypotheses, methods, and/or analyses of a scientific study before it is conducted. Clinical trial registration is similar, although it may not require the registration of a study's analysis protocol. Finally, registered reports include the peer review and in principle acceptance of a study protocol prior to data collection.
Preregistration assists in the identification and/or reduction of a variety of potentially problematic research practices, including p-hacking, publication bias, data dredging, inappropriate forms of post hoc analysis, and (relatedly) HARKing. It has recently gained prominence in the open science community as a potential solution to some of the issues that are thought to underlie the replication crisis. However, critics have argued that it may not be necessary when other open science practices are implemented.
In the standard preregistration format, researchers prepare a research protocol document prior to conducting their research. Ideally, this document indicates the research hypotheses, sampling procedure, sample size, research design, testing conditions, stimuli, measures, data coding and aggregation method, criteria for data exclusions, and statistical analyses, including potential variations on those analyses. This preregistration document is then posted on a publicly available website such as the Open Science Framework or AsPredicted. The preregistered study is then conducted, and a report of the study and its results is submitted for publication together with access to the (anonymised) preregistration document. This preregistration approach allows peer reviewers and subsequent readers to cross-reference the preregistration document with the published research article in order to identify (a) any “exploratory” tests that were not included in the preregistration document and (b) any suppressed tests that were included in the preregistered protocol but excluded from the final research report.
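The cross-referencing step amounts to a set comparison between what was preregistered and what was reported. A minimal sketch in Python, with hypothetical test names (this is an illustration, not a standard tool):

```python
# Hypothetical illustration: cross-referencing a preregistration
# against the tests reported in the final article.
preregistered = {"t-test: condition A vs B", "regression: X -> Y"}
reported = {"t-test: condition A vs B", "ANOVA: A x B interaction"}

# (a) tests reported but absent from the preregistration -> exploratory
exploratory = reported - preregistered

# (b) tests preregistered but missing from the report -> possibly suppressed
suppressed = preregistered - reported

print(sorted(exploratory))  # ['ANOVA: A x B interaction']
print(sorted(suppressed))   # ['regression: X -> Y']
```

In practice this comparison is done by human reviewers reading both documents, but the logic is the same: the preregistration defines the reference set against which the published report is audited.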
The registered report format requires authors to submit a description of the study methods and analyses prior to data collection. Once the method and analysis plan is vetted through Stage 1 peer review, publication of the findings is provisionally guaranteed. The associated study is then conducted, and the research report is submitted to Stage 2 peer review. Stage 2 peer review confirms that the actual research methods are consistent with the preregistered protocol and that quality thresholds are met (e.g., manipulation checks confirm the validity of the experimental manipulation). Studies that pass Stage 2 peer review are then published regardless of whether the results are confirming or disconfirming, significant or nonsignificant.
Hence, both preregistration and registered reports involve creating a time-stamped non-modifiable public record of the study and analysis plan before the data is collected. However, the study and analysis plan is only subjected to a formal peer review before data collection in the case of registered reports.
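The "time-stamped, non-modifiable" property can be illustrated with a cryptographic hash: any later change to the protocol text changes its digest, so a publicly posted digest plus timestamp makes silent edits detectable. A minimal sketch, not a description of how OSF or AsPredicted are actually implemented:

```python
import hashlib
from datetime import datetime, timezone

def register(protocol_text: str) -> dict:
    """Return a tamper-evident record: digest of the protocol plus a timestamp."""
    digest = hashlib.sha256(protocol_text.encode("utf-8")).hexdigest()
    return {"sha256": digest, "registered_at": datetime.now(timezone.utc).isoformat()}

protocol = "H1: condition A > condition B; n = 120; two-tailed t-test, alpha = .05"
record = register(protocol)

# Re-hashing the unchanged protocol reproduces the digest; any post hoc
# edit yields a different digest, so the stored record no longer matches.
assert register(protocol)["sha256"] == record["sha256"]
assert register(protocol + " (revised)")["sha256"] != record["sha256"]
```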
Preregistration can be used in relation to a variety of different research designs and methods, including:
- Quantitative research in psychology (Bosnjak et al., 2021)
- Qualitative research (Haven & Van Grootel, 2019)
- Preexisting data (Mertens & Krypotos, 2019; Weston et al., 2019)
- Single case designs (Johnson & Cook, 2019)
- Electroencephalogram research (Paul et al., 2021)
- Experience sampling (Kirtley et al., 2019)
- Exploratory research (Dirnagl, 2020)
Clinical trial registration
Clinical trial registration is the practice of documenting clinical trials before they are performed in a clinical trials registry so as to combat publication bias and selective reporting. Registration of clinical trials is required in some countries and is increasingly being standardized. Some top medical journals will only publish the results of trials that have been pre-registered.
A clinical trials registry is a platform which catalogs registered clinical trials. ClinicalTrials.gov, run by the United States National Library of Medicine (NLM), was the first online registry for clinical trials, and remains the largest and most widely used. In addition to combating bias, clinical trial registries serve to increase transparency and access to clinical trials for the public. Clinical trials registries are often searchable (e.g. by disease/indication, drug, location, etc.). Trials are registered by the pharmaceutical, biotech, or medical device company sponsoring the study (the sponsor), by the hospital or foundation which is sponsoring the study, or by another organization, such as a contract research organization (CRO), which is running the study.
There has been a push from governments and international organizations, especially since 2005, to make clinical trial information more widely available and to standardize registries and processes of registering. The World Health Organization is working toward "achieving consensus on both the minimal and the optimal operating standards for trial registration".
Creation and development
For many years, scientists and others have worried about reporting biases: negative or null results from initiated clinical trials may be less likely to be published than positive results, skewing the literature and our understanding of how well interventions work. This concern is international and has been written about for over 50 years. One proposal to address this potential bias was a comprehensive register of initiated clinical trials that would inform the public which trials had been started. The ethical issues seemed to interest the public most: trialists (including those with potential commercial gain) benefited from the people who enrolled in trials but were not required to "give back" by telling the public what they had learned.
Those particularly concerned by this double standard were systematic reviewers, who summarize what is known from clinical trials. If the literature is skewed, then the results of a systematic review are also likely to be skewed, possibly favoring the test intervention even when the accumulated data, had all of them been made public, would not support that conclusion.
ClinicalTrials.gov was originally developed largely as a result of breast cancer consumer lobbying, which led to authorizing language in the FDA Modernization Act of 1997 (Food and Drug Administration Modernization Act of 1997. Pub L No. 105-115, §113 Stat 2296), but the law provided neither funding nor a mechanism of enforcement. In addition, the law required that ClinicalTrials.gov only include trials of serious and life-threatening diseases.
Then, two events occurred in 2004 that increased public awareness of the problems of reporting bias. First, the then-New York State Attorney General Eliot Spitzer sued GlaxoSmithKline (GSK) because they had failed to reveal results from trials showing that certain antidepressants might be harmful.
Shortly thereafter, the International Committee of Medical Journal Editors (ICMJE) announced that their journals would not publish reports of trials unless they had been registered. The ICMJE action was probably the most important motivator for trial registration, as investigators wanted to reserve the possibility that they could publish their results in prestigious journals, should they want to.
In 2007, the Food and Drug Administration Amendments Act of 2007 (FDAAA; Public Law 110-85) clarified the requirements for registration and also set penalties for non-compliance.
The ICMJE decided that, from July 1, 2005, no trials would be considered for publication unless they were included in a clinical trials registry. The World Health Organization began its push for clinical trial registration with the initiation of the International Clinical Trials Registry Platform. There has also been action from the pharmaceutical industry, which released plans to make clinical trial data more transparent and publicly available. The revised Declaration of Helsinki, released in October 2008, states that "Every clinical trial must be registered in a publicly accessible database before recruitment of the first subject."
The World Health Organization maintains an international registry portal at http://apps.who.int/trialsearch/. WHO states that the international registry's mission is "to ensure that a complete view of research is accessible to all those involved in health care decision making. This will improve research transparency and will ultimately strengthen the validity and value of the scientific evidence base."
Since 2007, the International Committee of Medical Journal Editors (ICMJE) has accepted registration in all primary registries of the WHO network, in addition to ClinicalTrials.gov. Registration in registries other than ClinicalTrials.gov has increased across study designs since 2014.
Overview of clinical trial registries
Worldwide, there is a growing number of registries. A 2013 study identified the following among the top five registries (numbers as of August 2013):

|Rank||Registry||Registered trials|
|3.||Japan registries network (JPRN)||12,728|
|5.||Australia and New Zealand (ANZCTR)||8,216|
Over 200 journals offer a registered reports option (Centre for Open Science, 2019), and the number of journals that are adopting registered reports is approximately doubling each year (Chambers et al., 2019).
Psychological Science has encouraged the preregistration of studies and the reporting of effect sizes and confidence intervals. The editor-in-chief also noted that the editorial staff will ask for replication of studies that report surprising findings from small samples before allowing the manuscripts to be published.
Nature Human Behaviour has adopted the registered report format, as it “shift[s] the emphasis from the results of research to the questions that guide the research and the methods used to answer them”.
European Journal of Personality defines this format: “In a registered report, authors create a study proposal that includes theoretical and empirical background, research questions/hypotheses, and pilot data (if available). Upon submission, this proposal will then be reviewed prior to data collection, and if accepted, the paper resulting from this peer-reviewed procedure will be published, regardless of the study outcomes.”
Note that only a very small proportion of academic journals in psychology and the neurosciences explicitly state, in their aims and scope or instructions to authors, that they welcome submissions of replication studies. This discourages the reporting, and even the conduct, of replication studies.
Several articles have outlined the rationale for preregistration (e.g., Lakens, 2019; Nosek et al., 2018; Wagenmakers et al., 2012). As Rubin (2020, Table 1) summarized, preregistration helps to identify and/or curtail the following issues:
- Poorly planned hypotheses and tests
- HARKing: undisclosed hypothesizing after the results are known
- The suppression of a priori hypotheses that yield null or disconfirming results
- Deviations from planned analyses
- Lack of clarity between confirmatory and exploratory analyses
- Undisclosed multiple testing
- Forking paths, in which researchers make decisions about which tests to conduct based on information from their sample
- p-hacking: continuing data analysis until a significant p value is obtained
- Optional stopping: repeating the same test at different stages of data collection until a significant result is obtained
- Invalid use of p values, because p values lose their meaning in exploratory analyses
- Researchers’ biases, including the confirmation bias and hindsight bias
- Selective reporting of results: “cherry-picking” specific supportive results and suppressing non-supportive results
- Unclear test severity, preventing the identification of hypotheses that have a low probability of being confirmed when they are false
- Unreported null findings
- Publication bias: unpublished null findings, resulting in the file drawer problem
- Potentially low replicability, ostensibly due to the use of questionable research practices (e.g., HARKing, p-hacking, optional stopping)
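Optional stopping's effect on the false-positive rate is easy to demonstrate by simulation: under a true null hypothesis, testing repeatedly as data accumulate pushes the chance of ever observing p < .05 far above the nominal 5%. A minimal sketch with illustrative parameters (ten interim looks, batches of ten observations, z-test with known variance to keep the code simple):

```python
import random
from statistics import NormalDist

def optional_stopping_hit(checks=10, batch=10, alpha=0.05, rng=random):
    """Under H0 (mean 0, sd 1), run a z-test after each batch of data;
    return True if any interim test reaches p < alpha."""
    data = []
    for _ in range(checks):
        data.extend(rng.gauss(0, 1) for _ in range(batch))
        n = len(data)
        z = (sum(data) / n) * n ** 0.5        # z = mean / (sd / sqrt(n)), sd = 1
        p = 2 * (1 - NormalDist().cdf(abs(z)))
        if p < alpha:
            return True
    return False

random.seed(1)
trials = 2000
rate = sum(optional_stopping_hit() for _ in range(trials)) / trials
print(rate)  # typically around 0.2: roughly four times the nominal 5%
```

A single test at the final sample size would keep the false-positive rate at 5%; it is the repeated peeking, with the option to stop at the first significant result, that inflates it.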
Identifying issues such as these via preregistration helps to improve "the interpretability and credibility of research findings" (Nosek et al., 2018, p. 2605). However, Rubin (2020) argued that only some of these issues are problematic and only under some conditions. He also argued that, when they are problematic, preregistration is not necessary to identify these issues. Instead, they can be identified via (a) clear rationales for current hypotheses and analytical approaches, (b) public access to research data, materials, and code, and (c) demonstrations of the robustness of research conclusions to alternative interpretations and analytical approaches.
Proponents of preregistration have argued that it is "a method to increase the credibility of published results" (Nosek & Lakens, 2014), that it "makes your science better by increasing the credibility of your results" (Centre for Open Science), and that it "improves the interpretability and credibility of research findings" (Nosek et al., 2018, p. 2605). This argument assumes that non-preregistered exploratory analyses are less "credible" and/or "interpretable" than preregistered confirmatory analyses because they may involve "circular reasoning" in which post hoc hypotheses are based on the observed data (Nosek et al., 2018, p. 2600). However, critics have argued that preregistration is not necessary to identify circular reasoning during exploratory analyses (Rubin, 2020). Circular reasoning can be identified by analysing the reasoning per se without needing to know whether that reasoning was preregistered. Critics have also noted that the idea that preregistration improves research credibility may deter researchers from undertaking non-preregistered exploratory analyses (Coffman & Niederle, 2015; see also Collins et al., 2021, Study 1). In response, preregistration advocates have stressed that exploratory analyses are permitted in preregistered studies, and that the results of these analyses retain some value vis-à-vis hypothesis generation rather than hypothesis testing. Preregistration merely makes the distinction between confirmatory and exploratory research clearer (Nosek et al., 2018; Nosek & Lakens, 2014; Wagenmakers et al., 2012). Hence, although preregistration is supposed to reduce researcher degrees of freedom during the data analysis stage, it is also supposed to be "a plan, not a prison" (Dehaven, 2017).
However, critics counterargue that, if preregistration is only supposed to be a plan, and not a prison, then researchers should feel free to deviate from that plan and undertake exploratory analyses without fearing accusations of low research credibility due to circular reasoning and inappropriate research practices such as p-hacking and unreported multiple testing that leads to inflated familywise error rates (e.g., Navarro, 2020). Again, they have pointed out that preregistration is not necessary to address such concerns. For example, concerns about p-hacking and unreported multiple testing can be addressed if researchers engage in other open science practices, such as (a) open data and research materials and (b) robustness or multiverse analyses (Rubin, 2020; Steegen et al., 2016; for several other approaches, see Srivastava, 2018). Finally, and more fundamentally, critics have argued that the distinction between confirmatory and exploratory analyses is unclear and/or irrelevant (Devezer et al., 2020; Rubin, 2020; Szollosi & Donkin, 2019), and that concerns about inflated familywise error rates are unjustified when those error rates refer to abstract, atheoretical studywise hypotheses that are not being tested (Rubin, 2020, 2021; Szollosi et al., 2020).
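A multiverse analysis of the kind Steegen et al. (2016) describe runs the same comparison across all combinations of defensible analytic choices and inspects the distribution of results. A minimal sketch on toy data (the dataset, cutoffs, and transforms are illustrative assumptions, not from any cited study):

```python
from itertools import product
from statistics import mean

# Toy dataset: (score, reaction_time_ms) pairs for two groups.
group_a = [(5, 310), (7, 290), (6, 1250), (8, 300), (7, 315)]
group_b = [(4, 305), (5, 880), (5, 295), (3, 320), (4, 310)]

# Two defensible analytic choices define a 2x2 multiverse:
outlier_cutoffs = [800, 1000]                   # exclude trials with RT above cutoff (ms)
transforms = [lambda x: x, lambda x: x ** 0.5]  # raw vs. square-root scores

multiverse = []
for cutoff, f in product(outlier_cutoffs, transforms):
    a = [f(s) for s, rt in group_a if rt <= cutoff]
    b = [f(s) for s, rt in group_b if rt <= cutoff]
    multiverse.append(mean(a) - mean(b))  # group difference under this specification

# If the effect's sign and rough size agree across every specification,
# the conclusion is robust to these researcher degrees of freedom;
# if it flips, the result depends on a contestable analytic choice.
print([round(d, 2) for d in multiverse])
```

The point of such an analysis in this debate is that it exposes the full space of analytic flexibility directly, rather than constraining the researcher to one preregistered path through it.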
There are also concerns about the practical implementation of preregistration. Many preregistered protocols leave plenty of room for p-hacking (Bakker et al., 2020; Heirene et al., 2021; Ikeda et al., 2019; Singh et al., 2021), and researchers rarely follow the exact research methods and analyses that they preregister (Abrams et al., 2020; Claesen et al., 2019; Heirene et al., 2021; see also Boghdadly et al., 2018; Singh et al., 2021; Sun et al., 2019). For example, a survey of 27 preregistered studies found that researchers deviated from their preregistered plans in all cases (Claesen et al., 2019). The most frequent deviations were with regards to the planned sample size, exclusion criteria, and statistical model. Hence, what were intended as preregistered confirmatory tests ended up as unplanned exploratory tests. Again, preregistration advocates argue that deviations from preregistered plans are acceptable as long as they are reported transparently and justified. They also point out that even vague preregistrations help to reduce researcher degrees of freedom and make any residual flexibility transparent (Simmons et al., 2021, p. 180). However, critics have argued that it is not useful to identify or justify deviations from preregistered plans when those plans do not reflect high quality theory and research practice. As Rubin (2020) explained, “we should be more interested in the rationale for the current method and analyses than in the rationale for historical changes that have led up to the current method and analyses” (pp. 378–379).
Finally, some commentators have argued that, under some circumstances, preregistration may actually harm science by providing a false sense of credibility to research studies and analyses (Devezer et al., 2020; McPhetres, 2020; Pham & Oh, 2020; Szollosi et al., 2020). Consistent with this view, there is some evidence that researchers view registered reports as being more credible than standard reports on a range of dimensions (Soderberg et al., 2020; see also Field et al., 2020 for inconclusive evidence), although it is unclear whether this represents a "false" sense of credibility due to pre-existing positive community attitudes about preregistration or a genuine causal effect of registered reports on quality of research.
- Nosek, B. A.; Ebersole, C. R.; DeHaven, A. C.; Mellor, D. T. (2018). "The preregistration revolution". Proceedings of the National Academy of Sciences. 115 (11): 2600–2606. doi:10.1073/pnas.1708274114. PMC 5856500. PMID 29531091. S2CID 4639380.
- "Registered Replication Reports". Association for Psychological Science. Retrieved 2015-11-13.
- Rubin, M. (2020). "Does preregistration improve the credibility of research findings?". The Quantitative Methods for Psychology. 16 (4): 376–390. doi:10.20982/tqmp.16.4.p376. S2CID 221821323.
- Bosnjak, M.; Fiebach, C. J.; Mellor, D.; Mueller, S.; O’Connor, D. B.; Oswald, F. L.; Sokol-Chang, R. I. (2021). "A template for preregistration of quantitative research in psychology: Report of the Joint Psychological Societies Preregistration Task Force". PsyArXiv. doi:10.31234/osf.io/d7m5r. S2CID 236655778.
- Haven, T. L.; Van Grootel, D. L. (2019). "Preregistering qualitative research". Accountability in Research. 26 (3): 229–244. doi:10.1080/08989621.2019.1580147. PMID 30741570.
- Mertens, G.; Krypotos, A. M. (2019). "Preregistration of analyses of preexisting data". Psychologica Belgica. 59 (1): 338–352. doi:10.5334/pb.493. PMC 6706998. PMID 31497308. S2CID 201844047.
- Weston, S. J.; Ritchie, S. J.; Rohrer, J. M. (2019). "Recommendations for increasing the transparency of analysis of preexisting data sets". Advances in Methods and Practices in Psychological Science. 2 (3): 214–227. doi:10.1177/2515245919848684. PMC 7079740. PMID 32190814.
- Johnson, A. H.; Cook, B. G. (2019). "Preregistration in single-case design research". Exceptional Children. 86 (1): 95–112. doi:10.1177/0014402919868529. S2CID 204363608.
- Paul, M.; Govaart, G. H.; Schettino, A. (2021). "Making ERP research more transparent: Guidelines for preregistration". International Journal of Psychophysiology. 164: 52–63. doi:10.31234/osf.io/4tgve. PMID 33676957.
- Kirtley, O. J.; Lafit, G.; Achterhof, R.; Hiekkaranta, A. P.; Myin-Germeys, I. (2019). "Making the black box transparent: A template and tutorial for (pre-)registration of studies using experience sampling methods (ESM)". PsyArXiv. doi:10.31234/osf.io/seyq7.
- Dirnagl, U. (2020). "Preregistration of exploratory research: Learning from the golden age of discovery". PLOS Biol. 18 (3): e3000690. doi:10.1371/journal.pbio.3000690. PMC 7098547. PMID 32214315.
- "International Clinical Trials Registry Platform (ICTRP)". Who.int. Retrieved 2017-06-23.
- "WHO | Working Group on Best Practice for Clinical Trials Registers (BPG)". Who.int. Retrieved 2017-06-23.
- Barrett, Stephen (13 September 2004). "Major Journals Press for Clinical Trial Registration". www.quackwatch.org. Retrieved 22 May 2019.
- "WHO - Working Group on Best Practice for Clinical Trials Registers (BPG)". www.who.int.
- Dickersin, K; Rennie, D (2003). "Registering clinical trials". JAMA. 290 (4): 516–523. doi:10.1001/jama.290.4.516. PMID 12876095. S2CID 10184671.
- Sterling, TD (1959). "Publication decisions and their possible effects on inferences drawn from tests of significances – or vice versa". J Am Stat Assoc. 54 (285): 30–34. doi:10.1080/01621459.1959.10501497. JSTOR 2282137.
- International Collaborative Group on Clinical Trial Registries (1993). "Position paper and consensus recommendations on clinical trial registries. Ad Hoc Working Party of the International Collaborative Group on Clinical Trials Registries". Clin Trials Metaanal. 28 (4–5): 255–266. PMID 10146333.
- Dickersin, K; Rennie, D (2012). "The evolution of trial registries and their use to assess the clinical trial enterprise". JAMA. 307 (17): 1861–4. doi:10.1001/jama.2012.4230. PMID 22550202.
- SANCTR. "SANCTR > Home". www.sanctr.gov.za.
- "Archived copy". Archived from the original on 2010-07-06. Retrieved 2010-07-23.
- "Archived copy". Archived from the original on 2011-08-30. Retrieved 2010-09-02.
- "ANZCTR". www.anzctr.org.au.
- Gülmezoglu, AM; Pang, T; Horton, R; Dickersin, K (2005). "WHO facilitates international collaboration in setting standards for clinical trial registration". Lancet. 365 (9474): 1829–1831. doi:10.1016/s0140-6736(05)66589-0. PMID 15924966. S2CID 29203085.
- "International Clinical Trials Registry Platform (ICTRP)". World Health Organization.
- Banno, M; Tsujimoto, Y; Kataoka, Y (2019). "Studies registered in non-ClinicalTrials.gov accounted for an increasing proportion of protocol registrations in medical research". Journal of Clinical Epidemiology. 116: 106–113. doi:10.1016/j.jclinepi.2019.09.005. PMID 31521723.
- Anderson, Monique L.; Chiswell, Karen; Peterson, Eric D.; Tasneem, Asba; Topping, James; Califf, Robert M. (12 March 2015). "Compliance with Results Reporting at ClinicalTrials.gov". New England Journal of Medicine. 372 (11): 1031–1039. doi:10.1056/NEJMsa1409364. PMC 4508873. PMID 25760355.
- DeVito, Nicholas J; Bacon, Seb; Goldacre, Ben (February 2020). "Compliance with legal requirement to report clinical trial results on ClinicalTrials.gov: a cohort study". The Lancet. 395 (10221): 361–369. doi:10.1016/S0140-6736(19)33220-9. PMID 31958402. S2CID 210704225.
- Pullar, T; Kumar, S; Feely, M (October 1989). "Compliance in clinical trials". Annals of the Rheumatic Diseases. 48 (10): 871–5. doi:10.1136/ard.48.10.871. PMC 1003898. PMID 2684057.
- Miller, Jennifer E; Korn, David; Ross, Joseph S (12 November 2015). "Clinical trial registration, reporting, publication and FDAAA compliance: a cross-sectional analysis and ranking of new drugs approved by the FDA in 2012". BMJ Open. 5 (11): e009758. doi:10.1136/bmjopen-2015-009758. PMC 4654354. PMID 26563214.
- Miseta, Ed (9 January 2018). "As ClinicalTrials.gov Turns 10, Will We See Compliance Improve?". www.clinicalleader.com.
- Huser, V.; Cimino, J. J. (2013). "Evaluating adherence to the International Committee of Medical Journal Editors' policy of mandatory, timely clinical trial registration". Journal of the American Medical Informatics Association. 20 (e1): e169–74. doi:10.1136/amiajnl-2012-001501. PMC 3715364. PMID 23396544.
- Centre for Open Science. "Registered Reports: Peer review before results are known to align scientific values and practices".
- Chambers, C. D.; Forstmann, B.; Pruszynski, J. A. (2019). "Science in flux: Registered Reports and beyond at the European Journal of Neuroscience". European Journal of Neuroscience. 49 (1): 4–5. doi:10.1111/ejn.14319. PMID 30584679. S2CID 58645509.
- Lindsay, D. Stephen (2015-11-09). "Replication in Psychological Science". Psychological Science. 26 (12): 1827–32. doi:10.1177/0956797615616374. ISSN 0956-7976. PMID 26553013.
- Mellor, D. (2017). "Promoting reproducibility with registered reports". Nature Human Behaviour. 1: 0034. doi:10.1038/s41562-016-0034. S2CID 28976450.
- "Streamlined review and registered reports soon to be official at EJP".
- Yeung, Andy W. K. (2017). "Do Neuroscience Journals Accept Replications? A Survey of Literature". Frontiers in Human Neuroscience. 11: 468. doi:10.3389/fnhum.2017.00468. ISSN 1662-5161. PMC 5611708. PMID 28979201.
- Martin, G. N.; Clarke, Richard M. (2017). "Are Psychology Journals Anti-replication? A Snapshot of Editorial Practices". Frontiers in Psychology. 8: 523. doi:10.3389/fpsyg.2017.00523. ISSN 1664-1078. PMC 5387793. PMID 28443044.
- "Registered Reports Overview". Center for Open Science. Retrieved 2018-11-28.
- Wagenmakers, E. J.; Wetzels, R.; Borsboom, D.; van der Maas, H. L.; Kievit, R. A. (2012). "An agenda for purely confirmatory research". Perspectives on Psychological Science. 7 (6): 632–638. doi:10.1177/1745691612463078. PMID 26168122. S2CID 5096417.
- Lakens, D. (2019). "The value of preregistration for psychological science: A conceptual analysis" (PDF). Japanese Psychological Review. 62 (3): 221–230.
- Nosek, B. A.; Lakens, D. (2014). "Registered reports: A method to increase the credibility of published results". Social Psychology. 45 (3): 137–141. doi:10.1027/1864-9335/a000192.
- Coffman, L. C.; Niederle, M. (2015). "Pre-analysis plans have limited upside, especially where replications are feasible". Journal of Economic Perspectives. 29 (3): 81–98. doi:10.1257/jep.29.3.81.
- Collins, H.K.; Whillans, A. V.; John, L. K (2021). "Joy and rigor in behavioral science". Organizational Behavior and Human Decision Processes. 164: 179–191. doi:10.1016/j.obhdp.2021.03.002. S2CID 234848511.
- Dehaven, A. "Preregistration: A plan, not a prison". Centre for Open Science. Retrieved 25 September 2020.
- Steegen, S.; Tuerlinckx, F.; Gelman, A.; Vanpaemel, W. (2016). "Increasing transparency through a multiverse analysis". Perspectives on Psychological Science. 11 (5): 702–712. doi:10.1177/1745691616658637. PMID 27694465.
- Srivastava, S. (2018). "Sound inference in complicated research: A multi-strategy approach". PsyArXiv. doi:10.31234/osf.io/bwr48.
- Devezer, B.; Navarro, D. J.; Vandekerckhove, J.; Buzbas, E. O. (2020). "The case for formal methodology in scientific reform". bioRxiv: 2020.04.26.048306. doi:10.1101/2020.04.26.048306. S2CID 218466913.
- Szollosi, A.; Donkin, C. (2019). "Arrested theory development: The misguided distinction between exploratory and confirmatory research". PsyArXiv. doi:10.31234/osf.io/suzej.
- Szollosi, A.; Kellen, D.; Navarro, D. J.; Shiffrin, R.; van Rooji, I.; Van Zandt, T.; Donkin, C. (2020). "Is preregistration worthwhile?". Trends in Cognitive Sciences. 24 (2): 94–95. doi:10.1016/j.tics.2019.11.009. PMID 31892461. S2CID 209500379.
- Rubin, Mark (2021). "When to adjust alpha during multiple testing: A consideration of disjunction, conjunction, and individual testing". Synthese. arXiv:2107.02947. doi:10.1007/s11229-021-03276-4. S2CID 235755301.
- Bakker, M.; Veldkamp, C. L. S.; van Assen, M. A. L. M.; Crompvoets, E. A. V.; Ong, H. H.; Nosek, B.; Soderberg, C. K.; Mellor, D.; Wicherts, J. M. (2020). "Ensuring the quality and specificity of preregistrations". PLOS Biol. 18 (12): e3000937. doi:10.1371/journal.pbio.3000937. PMC 7725296. PMID 33296358.
- Ikeda, A.; Xu, H.; Fuji, N.; Zhu, S.; Yamada, Y. (2019). "Questionable research practices following pre-registration". Japanese Psychological Review. 62 (3): 281–295.
- Singh, B.; Fairman, C. M.; Christensen, J. F.; Bolam, K. A.; Twomey, R.; Nunan, D.; Lahart, I. M. (2021). "Outcome reporting bias in exercise oncology trials (OREO): A cross-sectional study". medRxiv: 2021.03.12.21253378. doi:10.1101/2021.03.12.21253378. S2CID 232226715.
- Heirene, R.; LaPlante, D.; Louderback, E. R.; Keen, B.; Bakker, M.; Serafimovska, A.; Gainsbury, S. M. "Preregistration specificity & adherence: A review of preregistered gambling studies & cross-disciplinary comparison". PsyArXiv. Retrieved 17 July 2021.
- Abrams, E.; Libgober, J.; List, J. A. (2020). "Research registries: Facts, myths, and possible improvements" (PDF). NBER Working Papers. 27250.
- Claesen, A.; Gomes, S.; Tuerlinckx, F.; Vanpaemel, W. (2019). "Preregistration: Comparing dream to reality". PsyArXiv. doi:10.31234/osf.io/d8wex.
- El-Boghdadly, K.; Wiles, M. D.; Atton, S.; Bailey, C. R. (2018). "Adherence to guidance on registration of randomised controlled trials published in Anaesthesia". Anaesthesia. 73 (5): 556–563. doi:10.1111/anae.14103. PMID 29292498.
- Sun, L. W.; Lee, D. J.; Collins, J. A.; Carll, T. C.; Ramahi, K.; Sandy, S. J.; Unteriner, J. G.; Weinberg, D. V. (2019). "Assessment of consistency between peer-reviewed publications and clinical trial registries". JAMA Ophthalmology. 137 (5): 552–556. doi:10.1001/jamaophthalmol.2019.0312. PMC 6512264. PMID 30946427.
- Simmons, J. P.; Nelson, L. D.; Simonsohn, U. (2021). "Pre-registration is a game changer. But, like random assignment, it is neither necessary nor sufficient for credible science". Journal of Consumer Psychology. 31 (1): 177–180. doi:10.1002/jcpy.1207. S2CID 230629031.
- McPhetres, J. (2020). "What should a preregistration contain?". PsyArXiv. doi:10.31234/osf.io/cj5mh.
- Pham, M. T.; Oh, T. T. (2020). "Preregistration is neither sufficient nor necessary for good science". Journal of Consumer Psychology. 31: 163–176. doi:10.1002/jcpy.1209.
- Field, S. M.; Wagenmakers, E. J.; Kiers, H. A.; Hoekstra, R.; Ernst, A.F.; van Ravenzwaaij, D. (2020). "The effect of preregistration on trust in empirical research findings: Results of a registered report". Royal Society Open Science. 7 (4): 181351. Bibcode:2020RSOS....781351F. doi:10.1098/rsos.181351. PMC 7211853. PMID 32431853.
- Soderberg, C. K.; Errington, T. M.; Schiavone, S. R.; Bottesini, J.; Singleton Thorn, F.; Vazire, S.; Esterling, K. M.; Nosek, B. A. (2020). "Research quality of registered reports compared to the standard publishing model". doi:10.31222/osf.io/7x9vy.