Metascience
Metascience (also known as meta-research or evidence-based research) is the use of scientific methodology to study science itself. Metascience seeks to increase the quality of scientific research while reducing waste. It is also known as "research on research" and "the science of science", as it uses research methods to study how research is done and where improvements can be made. Metascience concerns itself with all fields of research and has been described as "a bird's eye view of science."[1] In the words of John Ioannidis, "Science is the best thing that has happened to human beings ... but we can do it better."[2]
In 1966, an early meta-research paper examined the statistical methods of 295 papers published in ten high-profile medical journals. It found that, "in almost 73% of the reports read ... conclusions were drawn when the justification for these conclusions was invalid." Meta-research in the following decades found many methodological flaws, inefficiencies, and poor practices in research across numerous scientific fields. Many scientific studies could not be reproduced, particularly in medicine and the soft sciences. The term "replication crisis" was coined in the early 2010s as part of a growing awareness of the problem.[3]
Measures have been implemented to address the issues revealed by metascience. These measures include the pre-registration of scientific studies and clinical trials as well as the founding of organizations such as CONSORT and the EQUATOR Network that issue guidelines for methodology and reporting. There are continuing efforts to reduce the misuse of statistics, to eliminate perverse incentives from academia, to improve the peer review process, to combat bias in scientific literature, and to increase the overall quality and efficiency of the scientific process.
History
In 1966, an early meta-research paper examined the statistical methods of 295 papers published in ten high-profile medical journals. It found that, "in almost 73% of the reports read ... conclusions were drawn when the justification for these conclusions was invalid."[5] In 2005, John Ioannidis published a paper titled "Why Most Published Research Findings Are False", which argued that a majority of papers in the medical field produce conclusions that are wrong.[4] The paper went on to become the most downloaded paper in the Public Library of Science[6][7] and is considered foundational to the field of metascience.[8] Later meta-research identified widespread difficulty in replicating results in many scientific fields, including psychology and medicine. This problem was termed "the replication crisis". Metascience has grown as a reaction to the replication crisis and to concerns about waste in research.[9]
Many prominent publishers are interested in meta-research and in improving the quality of their publications. Top journals such as Science, The Lancet, and Nature provide ongoing coverage of meta-research and problems with reproducibility.[10] In 2012, PLOS ONE launched a Reproducibility Initiative. In 2015, BioMed Central introduced a minimum-standards-of-reporting checklist for four of its titles.
The first international conference in the broad area of meta-research was the Research Waste/EQUATOR conference held in Edinburgh in 2015; the first international conference on peer review was the Peer Review Congress held in 1989.[11] In 2016, Research Integrity and Peer Review was launched. The journal's opening editorial called for "research that will increase our understanding and suggest potential solutions to issues related to peer review, study reporting, and research and publication ethics".[12]
Areas of meta-research
Metascience can be categorized into five major areas of interest: Methods, Reporting, Reproducibility, Evaluation, and Incentives. These correspond, respectively, with how to perform, communicate, verify, evaluate, and reward research.[13]
Methods
Metascience seeks to identify poor research practices, such as biases in research, poor study design, and the abuse of statistics, and to find methods to reduce these practices.[13] Meta-research has identified numerous biases in scientific literature.[14] Of particular note is the widespread misuse of p-values and abuse of statistical significance.[15]
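A minimal sketch of one well-documented form of statistical misuse (the scenario and numbers are illustrative, not drawn from the cited studies): running many comparisons on pure noise and reporting whichever clears p < 0.05 produces spurious "findings" at a predictable rate.

```python
# A minimal sketch of multiple-testing misuse: with pure-noise data,
# roughly 1 in 20 comparisons will reach p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_per_group = 20, 30

p_values = []
for _ in range(n_tests):
    a = rng.normal(size=n_per_group)  # group A: no real effect
    b = rng.normal(size=n_per_group)  # group B: drawn from the same distribution
    p_values.append(stats.ttest_ind(a, b).pvalue)

hits = sum(p < 0.05 for p in p_values)
print(f"{hits} of {n_tests} noise-only comparisons reached p < 0.05")
# Reporting only the "hits" (and hiding the other tests) is one form of p-hacking.
```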
Reporting
Meta-research has identified poor practices in reporting, explaining, disseminating and popularizing research, particularly within the social and health sciences. Poor reporting makes it difficult to accurately interpret the results of scientific studies, to replicate studies, and to identify biases and conflicts of interest among authors. Solutions include the implementation of reporting standards, and greater transparency in scientific studies (including better requirements for disclosure of conflicts of interest). There is an attempt to standardize reporting of data and methodology through the creation of guidelines by reporting organizations such as CONSORT and the larger EQUATOR Network.[13]
Reproducibility
The replication crisis is an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate.[16][17] While the crisis has its roots in the meta-research of the mid- to late-1900s, the phrase "replication crisis" was not coined until the early 2010s[3] as part of a growing awareness of the problem.[13] The replication crisis particularly affects psychology (especially social psychology) and medicine.[18][19] Replication is an essential part of the scientific process, and the widespread failure of replication puts into question the reliability of affected fields.[20]
Moreover, replication of research (or failure to replicate) is considered less influential than original research, and is less likely to be published in many fields. This discourages the reporting of, and even attempts to replicate, studies.[21][22]
Evaluation
Metascience seeks to create a scientific foundation for peer review. Meta-research evaluates peer review systems including pre-publication peer review, post-publication peer review, and open peer review. It also seeks to develop better research funding criteria.[13]
Incentives
Metascience seeks to promote better research through better incentive systems. This includes studying the accuracy, effectiveness, costs, and benefits of different approaches to ranking and evaluating research and those who perform it.[13] Critics argue that perverse incentives have created a publish-or-perish environment in academia which promotes the production of junk science, low quality research, and false positives.[23][24] According to Brian Nosek, “The problem that we face is that the incentive system is focused almost entirely on getting research published, rather than on getting research right.”[25] Proponents of reform seek to structure the incentive system to favor higher-quality results.[26]
Reforms
Meta-research identifying flaws in scientific practice has inspired reforms in science. These reforms seek to address and fix problems in scientific practice which lead to low-quality or inefficient research.
Pre-registration
The practice of registering a scientific study before it is conducted is called pre-registration. It arose as a means to address the replication crisis. Pre-registration requires the submission of a registered report, which is then accepted for publication or rejected by a journal based on theoretical justification, experimental design, and the proposed statistical analysis. Pre-registration of studies serves to prevent publication bias, reduce data dredging, and increase replicability.[27][28]
Reporting standards
Studies showing poor consistency and quality of reporting have demonstrated the need for reporting standards and guidelines in science, which has led to the rise of organisations that produce such standards, such as CONSORT (Consolidated Standards of Reporting Trials) and the EQUATOR Network.
The EQUATOR (Enhancing the QUAlity and Transparency Of health Research)[29] Network is an international initiative aimed at promoting transparent and accurate reporting of health research studies to enhance the value and reliability of medical research literature.[30] The EQUATOR Network was established with the goals of raising awareness of the importance of good reporting of research, assisting in the development, dissemination and implementation of reporting guidelines for different types of study designs, monitoring the status of the quality of reporting of research studies in the health sciences literature, and conducting research relating to issues that impact the quality of reporting of health research studies.[31] The Network acts as an "umbrella" organisation, bringing together developers of reporting guidelines, medical journal editors and peer reviewers, research funding bodies, and other key stakeholders with a mutual interest in improving the quality of research publications and research itself.
Applications
Medicine
Clinical research in medicine is often of low quality, and many studies cannot be replicated.[32][33] An estimated 85% of research funding is wasted.[34] Additionally, the presence of bias affects research quality.[35] The pharmaceutical industry exerts substantial influence on the design and execution of medical research. Conflicts of interest are common among authors of medical literature[36] and among editors of medical journals. While almost all medical journals require their authors to disclose conflicts of interest, editors are not required to do so.[37] Financial conflicts of interest have been linked to higher rates of positive study results. In antidepressant trials, pharmaceutical sponsorship is the best predictor of trial outcome.[38]
Blinding is another focus of meta-research, as error caused by poor blinding is a source of experimental bias. Blinding is not well reported in medical literature, and widespread misunderstanding of the subject has resulted in poor implementation of blinding in clinical trials.[39] Furthermore, failure of blinding is rarely measured or reported.[40] Research showing the failure of blinding in antidepressant trials has led some scientists to argue that antidepressants are no better than placebo.[41][42] In light of meta-research showing failures of blinding, CONSORT standards recommend that all clinical trials assess and report the quality of blinding.[43]
Studies have shown that systematic reviews of existing research evidence are sub-optimally used in planning new research or summarizing the results.[44] Cumulative meta-analyses of studies evaluating the effectiveness of medical interventions have shown that many clinical trials could have been avoided if a systematic review of existing evidence had been done prior to conducting a new trial.[45][46][47] For example, Lau et al.[45] analyzed 33 clinical trials (involving 36,974 patients) evaluating the effectiveness of intravenous streptokinase for acute myocardial infarction. Their cumulative meta-analysis demonstrated that 25 of 33 trials could have been avoided if a systematic review had been conducted prior to conducting a new trial. In other words, randomizing 34,542 patients was potentially unnecessary. One study[48] analyzed 1,523 clinical trials included in 227 meta-analyses and concluded that "less than one quarter of relevant prior studies" were cited. They also confirmed earlier findings that most clinical trial reports do not present a systematic review to justify the research or summarize the results.[48]
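The cumulative meta-analysis technique described above can be sketched in a few lines; the effect sizes below are invented for illustration and are not Lau et al.'s streptokinase data. Trials are pooled with inverse-variance (fixed-effect) weights as they accumulate, showing how the running estimate can settle well before further trials are run.

```python
# Sketch of a cumulative fixed-effect meta-analysis using inverse-variance weights.
# The effects below are hypothetical log odds ratios, NOT Lau et al.'s trial data.
import math

trials = [  # (log_odds_ratio, standard_error), in order of publication
    (-0.40, 0.30),
    (-0.25, 0.22),
    (-0.35, 0.18),
    (-0.30, 0.12),
]

sum_w = sum_wy = 0.0
for i, (effect, se) in enumerate(trials, start=1):
    w = 1.0 / se ** 2              # inverse-variance weight
    sum_w += w
    sum_wy += w * effect
    pooled = sum_wy / sum_w        # running pooled estimate
    pooled_se = math.sqrt(1.0 / sum_w)
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"after trial {i}: pooled log OR = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# Once the confidence interval clearly excludes 0, further trials add little information.
```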
Many treatments used in modern medicine have been proven to be ineffective, or even harmful. A 2007 study by John Ioannidis found that it took an average of ten years for the medical community to stop referencing popular practices after their efficacy was unequivocally disproven.[49][50]
Psychology
Metascience has revealed significant problems in psychological research. The field suffers from high bias, low reproducibility, and widespread misuse of statistics.[51][52][53] The replication crisis affects psychology more strongly than any other field; as many as two-thirds of highly publicized findings may be impossible to replicate.[54] Meta-research finds that 80-95% of psychological studies support their initial hypotheses, which strongly implies the existence of publication bias.[55]
The replication crisis has led to renewed efforts to re-test important findings.[56][57] In response to concerns about publication bias and p-hacking, more than 140 psychology journals have adopted result-blind peer review, in which studies are pre-registered and published without regard for their outcome.[58] An analysis of these reforms estimated that 61 percent of result-blind studies produce null results, in contrast with 5 to 20 percent in earlier research. This analysis shows that result-blind peer review substantially reduces publication bias.[55]
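A toy simulation (every parameter is an assumption chosen for illustration, not an empirical estimate) shows how selective publication alone can produce a literature in which most published studies support their hypotheses, consistent with the contrast between conventional and result-blind publication rates described above.

```python
# Toy simulation of publication bias: every parameter here is an assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n = 2000, 40
true_effect_share = 0.3   # assume 30% of tested hypotheses are actually true
effect_size = 0.5         # assumed standardized effect when the hypothesis is true
null_pub_rate = 0.1       # assume only 10% of null results get published

published_positive = published_total = 0
for _ in range(n_studies):
    real = rng.random() < true_effect_share
    d = effect_size if real else 0.0
    a = rng.normal(d, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    p = stats.ttest_ind(a, b).pvalue
    supported = p < 0.05 and a.mean() > b.mean()   # "hypothesis supported"
    if supported or rng.random() < null_pub_rate:  # selective publication
        published_total += 1
        published_positive += supported

print(f"share of published studies supporting their hypothesis: "
      f"{published_positive / published_total:.0%}")
```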
Psychologists routinely confuse statistical significance with practical importance, enthusiastically reporting great certainty in unimportant facts.[59] Some psychologists have responded with an increased use of effect size statistics, rather than sole reliance on p values.[citation needed]
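The distinction is easy to demonstrate: with a large enough sample, a negligible difference can reach an extreme level of statistical significance while its effect size remains trivial. The sketch below uses simulated data and the standard Cohen's d formula.

```python
# Sketch: statistical significance vs. practical importance on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200_000                      # very large sample
a = rng.normal(0.02, 1.0, n)     # tiny true difference (about d = 0.02)
b = rng.normal(0.00, 1.0, n)

res = stats.ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (a.mean() - b.mean()) / pooled_sd

print(f"p = {res.pvalue:.1e}   (statistically significant)")
print(f"Cohen's d = {cohens_d:.3f}   (practically negligible)")
```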
Physics
Richard Feynman noted that estimates of physical constants were closer to published values than would be expected by chance. This was believed to be the result of confirmation bias: results that agreed with existing literature were more likely to be believed, and therefore published. Physicists now implement blinding to prevent this kind of bias.[60]
Associated fields
Journalology
Journalology, also known as publication science, is the scholarly study of all aspects of the academic publishing process.[61][62] The field seeks to improve the quality of scholarly research by implementing evidence-based practices in academic publishing.[63] The term "journalology" was coined by Stephen Lock, the former editor-in-chief of the BMJ. The first Peer Review Congress, held in 1989 in Chicago, Illinois, is considered a pivotal moment in the founding of journalology as a distinct field.[63] The field of journalology has been influential in pushing for study pre-registration in science, particularly in clinical trials. Clinical-trial registration is now expected in most countries.[63]
Scientometrics
Scientometrics concerns itself with measuring bibliographic data in scientific publications. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts.[64]
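One widely used example of such a measurement is the h-index, the largest number h such that h papers each have at least h citations. A minimal sketch with hypothetical citation counts:

```python
# Sketch: the h-index of an author, computed from per-paper citation counts.
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # hypothetical counts -> 3
```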
Scientific data science
Scientific data science is the use of data science to analyse research papers. It encompasses both qualitative and quantitative methods. Research in scientific data science includes fraud detection[65] and citation network analysis.[66]
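For instance, citation network analysis often applies graph-ranking algorithms such as PageRank to papers or authors. The following sketch runs a plain (unweighted) power-iteration PageRank on a small, made-up citation network; it is not the weighted variant used in the cited study.

```python
# Sketch: plain PageRank by power iteration on a tiny hypothetical citation network.
import numpy as np

# A[i][j] = 1 means paper i cites paper j (4 papers, made-up links).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 1, 1, 0]], dtype=float)

out_deg = A.sum(axis=1, keepdims=True)
out_deg[out_deg == 0] = 1.0          # guard against dangling nodes (none here)
M = (A / out_deg).T                  # column-stochastic transition matrix
d, n = 0.85, A.shape[0]              # standard damping factor

rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * M @ rank
print(np.round(rank, 3))             # higher score = more central paper
```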
See also
References
- ^ Ioannidis, John P. A.; Fanelli, Daniele; Dunne, Debbie Drake; Goodman, Steven N. (2015-10-02). "Meta-research: Evaluation and Improvement of Research Methods and Practices". PLOS Biology. 13 (10): e1002264. doi:10.1371/journal.pbio.1002264. ISSN 1545-7885. PMC 4592065. PMID 26431313.
- ^ Bach, Becky (8 December 2015). "On communicating science and uncertainty: A podcast with John Ioannidis". Scope. Retrieved 20 May 2019.
- ^ a b Pashler, Harold; Wagenmakers, Eric Jan (2012). "Editors' Introduction to the Special Section on Replicability in Psychological Science: A Crisis of Confidence?". Perspectives on Psychological Science. 7 (6): 528–530. doi:10.1177/1745691612465253. PMID 26168108. S2CID 26361121.
- ^ a b Ioannidis, JP (August 2005). "Why most published research findings are false". PLOS Medicine. 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMC 1182327. PMID 16060722.
- ^ Schor, Stanley (1966). "Statistical Evaluation of Medical Journal Manuscripts". JAMA: The Journal of the American Medical Association. 195 (13): 1123–8. doi:10.1001/jama.1966.03100130097026. ISSN 0098-7484. PMID 5952081.
- ^ "Highly Cited Researchers". Retrieved September 17, 2015.
- ^ Medicine - Stanford Prevention Research Center. John P.A. Ioannidis
- ^ Robert Lee Hotz (September 14, 2007). "Most Science Studies Appear to Be Tainted By Sloppy Analysis". Wall Street Journal. Dow Jones & Company. Retrieved 2016-12-05.
- ^ "Researching the researchers". Nature Genetics. 46 (5): 417. 2014. doi:10.1038/ng.2972. ISSN 1061-4036. PMID 24769715.
- ^ Enserink, Martin (2018). "Research on research". Science. 361 (6408): 1178–1179. Bibcode:2018Sci...361.1178E. doi:10.1126/science.361.6408.1178. ISSN 0036-8075. PMID 30237336.
- ^ Rennie, Drummond (1990). "Editorial Peer Review in Biomedical Publication". JAMA. 263 (10): 1317–1441. doi:10.1001/jama.1990.03440100011001. ISSN 0098-7484. PMID 2304208.
- ^ Harriman, Stephanie L.; Kowalczuk, Maria K.; Simera, Iveta; Wager, Elizabeth (2016). "A new forum for research on research integrity and peer review". Research Integrity and Peer Review. 1 (1): 5. doi:10.1186/s41073-016-0010-y. ISSN 2058-8615. PMC 5794038. PMID 29451544.
- ^ a b c d e f Ioannidis, John P. A.; Fanelli, Daniele; Dunne, Debbie Drake; Goodman, Steven N. (2 October 2015). "Meta-research: Evaluation and Improvement of Research Methods and Practices". PLOS Biology. 13 (10): e1002264. doi:10.1371/journal.pbio.1002264. ISSN 1544-9173. PMC 4592065. PMID 26431313.
- ^ Fanelli, Daniele; Costas, Rodrigo; Ioannidis, John P. A. (2017). "Meta-assessment of bias in science". Proceedings of the National Academy of Sciences of the United States of America. 114 (14): 3714–3719. doi:10.1073/pnas.1618569114. ISSN 1091-6490. PMC 5389310. PMID 28320937. Retrieved 11 June 2019.
- ^ Check Hayden, Erika (2013). "Weak statistical standards implicated in scientific irreproducibility". Nature. doi:10.1038/nature.2013.14131. Retrieved 9 May 2019.
- ^ Schooler, J. W. (2014). "Metascience could rescue the 'replication crisis'". Nature. 515 (7525): 9. Bibcode:2014Natur.515....9S. doi:10.1038/515009a. PMID 25373639.
- ^ Smith, Noah. "Why 'Statistical Significance' Is Often Insignificant". Bloomberg. Retrieved 7 November 2017.
- ^ Gary Marcus (May 1, 2013). "The Crisis in Social Psychology That Isn't". The New Yorker.
- ^ Jonah Lehrer (December 13, 2010). "The Truth Wears Off". The New Yorker.
- ^ Staddon, John (2017) Scientific Method: How science works, fails to work or pretends to work. Taylor and Francis.
- ^ Yeung, Andy W. K. (2017). "Do Neuroscience Journals Accept Replications? A Survey of Literature". Frontiers in Human Neuroscience. 11: 468. doi:10.3389/fnhum.2017.00468. ISSN 1662-5161. PMC 5611708. PMID 28979201.
- ^ Martin, G. N.; Clarke, Richard M. (2017). "Are Psychology Journals Anti-replication? A Snapshot of Editorial Practices". Frontiers in Psychology. 8: 523. doi:10.3389/fpsyg.2017.00523. ISSN 1664-1078. PMC 5387793. PMID 28443044.
- ^ Binswanger, Mathias (2015). "How Nonsense Became Excellence: Forcing Professors to Publish". In Welpe, Isabell M.; Wollersheim, Jutta; Ringelhan, Stefanie; Osterloh, Margit (eds.). Incentives and Performance. Springer International Publishing. pp. 19–32. doi:10.1007/978-3-319-09785-5_2. ISBN 9783319097855.
- ^ Edwards, Marc A.; Roy, Siddhartha (2016-09-22). "Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition". Environmental Engineering Science. 34 (1): 51–61. doi:10.1089/ees.2016.0223. PMC 5206685. PMID 28115824.
- ^ Brookshire, Bethany (21 October 2016). "Blame bad incentives for bad science". Science News. Retrieved 11 July 2019.
- ^ Smaldino, Paul E.; McElreath, Richard (2016). "The natural selection of bad science". Royal Society Open Science. 3 (9): 160384. arXiv:1605.09511. Bibcode:2016RSOS....360384S. doi:10.1098/rsos.160384. PMC 5043322. PMID 27703703.
- ^ "Registered Replication Reports". Association for Psychological Science. Retrieved 2015-11-13.
- ^ Chambers, Chris (2014-05-20). "Psychology's 'registration revolution'". the Guardian. Retrieved 2015-11-13.
- ^ Simera, I; Moher, D; Hirst, A; Hoey, J; Schulz, KF; Altman, DG (2010). "Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network". BMC Medicine. 8: 24. doi:10.1186/1741-7015-8-24. PMC 2874506. PMID 20420659.
- ^ Simera, I.; Moher, D.; Hoey, J.; Schulz, K. F.; Altman, D. G. (2010). "A catalogue of reporting guidelines for health research". European Journal of Clinical Investigation. 40 (1): 35–53. doi:10.1111/j.1365-2362.2009.02234.x. PMID 20055895.
- ^ Simera, I; Altman, DG (October 2009). "Writing a research article that is "fit for purpose": EQUATOR Network and reporting guidelines". Evidence-Based Medicine. 14 (5): 132–4. doi:10.1136/ebm.14.5.132. PMID 19794009. S2CID 36739841.
- ^ Ioannidis, JPA (2016). "Why Most Clinical Research Is Not Useful". PLOS Med. 13 (6): e1002049. doi:10.1371/journal.pmed.1002049. PMC 4915619. PMID 27328301.
- ^ Ioannidis JA (13 July 2005). "Contradicted and initially stronger effects in highly cited clinical research". JAMA. 294 (2): 218–228. doi:10.1001/jama.294.2.218. PMID 16014596.
- ^ Chalmers, Iain; Glasziou, Paul (2009). "Avoidable waste in the production and reporting of research evidence". The Lancet. 374 (9683): 86–89. doi:10.1016/S0140-6736(09)60329-9. ISSN 0140-6736. PMID 19525005. S2CID 11797088.
- ^ Hsu, Jeremy. "Dark Side of Medical Research: Widespread Bias and Omissions". Live Science. Retrieved 24 May 2019.
- ^ "Confronting conflict of interest". Nature Medicine. 24 (11): 1629. November 2018. doi:10.1038/s41591-018-0256-7. ISSN 1546-170X. PMID 30401866.
- ^ Haque, Waqas; Minhajuddin, Abu; Gupta, Arjun; Agrawal, Deepak (2018). "Conflicts of interest of editors of medical journals". PLOS ONE. 13 (5): e0197141. Bibcode:2018PLoSO..1397141H. doi:10.1371/journal.pone.0197141. ISSN 1932-6203. PMC 5959187. PMID 29775468.
- ^ Moncrieff, J (March 2002). "The antidepressant debate". The British Journal of Psychiatry. 180 (3): 193–4. doi:10.1192/bjp.180.3.193. ISSN 0007-1250. PMID 11872507. Retrieved 22 May 2019.
- ^ Bello, S; Moustgaard, H; Hróbjartsson, A (October 2014). "The risk of unblinding was infrequently and incompletely reported in 300 randomized clinical trial publications". Journal of Clinical Epidemiology. 67 (10): 1059–69. doi:10.1016/j.jclinepi.2014.05.007. ISSN 1878-5921. PMID 24973822.
- ^ Tuleu, Catherine; Legay, Helene; Orlu-Gul, Mine; Wan, Mandy (1 September 2013). "Blinding in pharmacological trials: the devil is in the details". Archives of Disease in Childhood. 98 (9): 656–659. doi:10.1136/archdischild-2013-304037. ISSN 0003-9888. PMC 3833301. PMID 23898156. Retrieved 8 May 2019.
- ^ Kirsch, I (2014). "Antidepressants and the Placebo Effect". Zeitschrift für Psychologie. 222 (3): 128–134. doi:10.1027/2151-2604/a000176. ISSN 2190-8370. PMC 4172306. PMID 25279271.
- ^ Ioannidis, John PA (27 May 2008). "Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials?". Philosophy, Ethics, and Humanities in Medicine. 3: 14. doi:10.1186/1747-5341-3-14. ISSN 1747-5341. PMC 2412901. PMID 18505564.
- ^ Moher, David; Altman, Douglas G.; Schulz, Kenneth F. (24 March 2010). "CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials". BMJ. 340: c332. doi:10.1136/bmj.c332. ISSN 0959-8138. PMC 2844940. PMID 20332509. Retrieved 24 April 2019.
- ^ Clarke, Michael; Chalmers, Iain (1998). "Discussion Sections in Reports of Controlled Trials Published in General Medical Journals". JAMA. 280 (3): 280–2. doi:10.1001/jama.280.3.280. PMID 9676682.
- ^ a b Lau, Joseph; Antman, Elliott M; Jimenez-Silva, Jeanette; Kupelnick, Bruce; Mosteller, Frederick; Chalmers, Thomas C (1992). "Cumulative Meta-Analysis of Therapeutic Trials for Myocardial Infarction". New England Journal of Medicine. 327 (4): 248–54. doi:10.1056/NEJM199207233270406. PMID 1614465.
- ^ Fergusson, Dean; Glass, Kathleen Cranley; Hutton, Brian; Shapiro, Stan (2016). "Randomized controlled trials of aprotinin in cardiac surgery: Could clinical equipoise have stopped the bleeding?". Clinical Trials: Journal of the Society for Clinical Trials. 2 (3): 218–29, discussion 229–32. doi:10.1191/1740774505cn085oa. PMID 16279145. S2CID 31375469.
- ^ Clarke, Mike; Brice, Anne; Chalmers, Iain (2014). "Accumulating Research: A Systematic Account of How Cumulative Meta-Analyses Would Have Provided Knowledge, Improved Health, Reduced Harm and Saved Resources". PLOS ONE. 9 (7): e102670. Bibcode:2014PLoSO...9j2670C. doi:10.1371/journal.pone.0102670. PMC 4113310. PMID 25068257.
- ^ a b Robinson, Karen A; Goodman, Steven N (2011). "A Systematic Examination of the Citation of Prior Research in Reports of Randomized, Controlled Trials". Annals of Internal Medicine. 154 (1): 50–5. doi:10.7326/0003-4819-154-1-201101040-00007. PMID 21200038. S2CID 207536137.
- ^ Epstein, David. "When Evidence Says No, but Doctors Say Yes - The Atlantic". Pocket. Retrieved 10 April 2020.
- ^ Tatsioni, A; Bonitsis, NG; Ioannidis, JP (5 December 2007). "Persistence of contradicted claims in the literature". JAMA. 298 (21): 2517–26. doi:10.1001/jama.298.21.2517. ISSN 1538-3598. PMID 18056905.
- ^ Franco, Annie; Malhotra, Neil; Simonovits, Gabor (1 January 2016). "Underreporting in Psychology Experiments: Evidence From a Study Registry". Social Psychological and Personality Science. 7 (1): 8–12. doi:10.1177/1948550615598377. ISSN 1948-5506. S2CID 143182733.
- ^ Munafò, Marcus (29 March 2017). "Metascience: Reproducibility blues". Nature. 543 (7647): 619–620. Bibcode:2017Natur.543..619M. doi:10.1038/543619a. ISSN 1476-4687.
- ^ Stokstad, Erik (20 September 2018). "This research group seeks to expose weaknesses in science—and they'll step on some toes if they have to". Science. doi:10.1126/science.aav4784.
- ^ Open Science Collaboration (2015). "Estimating the reproducibility of psychological science" (PDF). Science. 349 (6251): aac4716. doi:10.1126/science.aac4716. hdl:10722/230596. PMID 26315443. S2CID 218065162.
- ^ a b Allen, Christopher P. G.; Mehler, David Marc Anton. "Open Science challenges, benefits and tips in early career and beyond". doi:10.31234/osf.io/3czyt.
- ^ Simmons, Joseph P.; Nelson, Leif D.; Simonsohn, Uri (2011). "False-Positive Psychology". Psychological Science. 22 (11): 1359–1366. doi:10.1177/0956797611417632. PMID 22006061.
- ^ Stroebe, Wolfgang; Strack, Fritz (2014). "The Alleged Crisis and the Illusion of Exact Replication" (PDF). Perspectives on Psychological Science. 9 (1): 59–71. doi:10.1177/1745691613514450. PMID 26173241. S2CID 31938129.
- ^ Aschwanden, Christie (6 December 2018). "Psychology's Replication Crisis Has Made The Field Better". FiveThirtyEight. Retrieved 19 December 2018.
- ^ Cohen, Jacob (1994). "The earth is round (p < .05)". American Psychologist. 49 (12): 997–1003. doi:10.1037/0003-066X.49.12.997. S2CID 380942.
- ^ MacCoun, Robert; Perlmutter, Saul (8 October 2015). "Blind analysis: Hide results to seek the truth". Nature. 526 (7572): 187–189. Bibcode:2015Natur.526..187M. doi:10.1038/526187a. PMID 26450040.
- ^ Galipeau, James; Moher, David; Campbell, Craig; Hendry, Paul; Cameron, D. William; Palepu, Anita; Hébert, Paul C. (March 2015). "A systematic review highlights a knowledge gap regarding the effectiveness of health-related training programs in journalology". Journal of Clinical Epidemiology. 68 (3): 257–265. doi:10.1016/j.jclinepi.2014.09.024. PMID 25510373.
- ^ Wilson, Mitch; Moher, David (March 2019). "The Changing Landscape of Journalology in Medicine". Seminars in Nuclear Medicine. 49 (2): 105–114. doi:10.1053/j.semnuclmed.2018.11.009. hdl:10393/38493. PMID 30819390.
- ^ a b c Couzin-Frankel, Jennifer (18 September 2018). "'Journalologists' use scientific methods to study academic publishing. Is their work improving science?". Science. doi:10.1126/science.aav4758.
- ^ Leydesdorff, L. and Milojevic, S., "Scientometrics" arXiv:1208.4566 (2013), forthcoming in: Lynch, M. (editor), International Encyclopedia of Social and Behavioral Sciences subsection 85030. (2015)
- ^ Markowitz, David M.; Hancock, Jeffrey T. (2016). "Linguistic obfuscation in fraudulent science". Journal of Language and Social Psychology. 35 (4): 435–445. doi:10.1177/0261927X15614605. S2CID 146174471.
- ^ Ding, Y. (2010). "Applying weighted PageRank to author citation networks". Journal of the American Society for Information Science and Technology. 62 (2): 236–245. arXiv:1102.1760. doi:10.1002/asi.21452. S2CID 3752804.
Further reading
- Lydia Denworth, "A Significant Problem: Standard scientific methods are under fire. Will anything change?", Scientific American, vol. 321, no. 4 (October 2019), pp. 62–67. "The use of p values for nearly a century [since 1925] to determine statistical significance of experimental results has contributed to an illusion of certainty and [to] reproducibility crises in many scientific fields. There is growing determination to reform statistical analysis... Some [researchers] suggest changing statistical methods, whereas others would do away with a threshold for defining "significant" results." (p. 63.)
- Harris, Richard (2017). Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hopes, and Wastes Billions. Basic Books. ISBN 9780465097913.
External links
Journals
- Minerva: A Journal of Science, Learning and Policy
- Research Integrity and Peer Review
- Research Policy
- Science and Public Policy
Conferences