Evidence-based practice


An evidence-based practice (EBP) is any practice that relies on scientific evidence for guidance and decision-making. Practices that are not evidence-based may rely on tradition, intuition, or other unproven methods. Evidence-based practices have been gaining ground since the formal introduction of evidence-based medicine in 1992, and have spread to the allied health professions, education, management, law, public policy, and other fields.[1] In light of studies showing problems in scientific research (such as the replication crisis), there has been a movement to apply evidence-based practices in scientific research itself. Research into the evidence-based practice of science is called metascience.

The movement towards evidence-based practices attempts to encourage, and in some instances to force, professionals and other decision-makers to pay more attention to evidence to inform their decision-making. The goal of evidence-based practice is to eliminate unsound or outdated practices in favor of more effective ones by shifting the basis for decision-making from tradition, intuition, and unsystematic experience to firmly grounded scientific research.[2]

History

For most of history, professions have based their practices on expertise derived from experience passed down in the form of tradition. Many of these practices have not been justified by evidence, which has sometimes enabled quackery and poor performance. Even when overt quackery is not present, the quality and efficiency of tradition-based practices may not be optimal. As the scientific method has become increasingly recognized as a sound means to evaluate practices, evidence-based practices have become increasingly adopted.

One of the earliest proponents of EBP was Archie Cochrane, an epidemiologist who authored the book Effectiveness and Efficiency: Random Reflections on Health Services in 1972. Cochrane's book argued for the importance of properly testing health care strategies, and was foundational to the evidence-based practice of medicine.[3] Cochrane suggested that because resources would always be limited, they should be used to provide forms of health care which had been shown in properly designed evaluations to be effective. Cochrane maintained that the most reliable evidence was that which came from randomized controlled trials.[4]

The term "evidence-based medicine" was introduced in 1992. This marked the first evidence-based practice to be formally established. Some early experiments in evidence-based medicine involved testing primitive medical techniques such a bloodletting, and studying the effectiveness of modern and accepted treatments. There has been a push for evidence-based practices in medicine by insurance providers, which have sometimes refused coverage of practices lacking in systematic evidence of usefulness. It is now expected by most clients that medical professionals should make decisions based on evidence, and stay informed about the most up-to-date information. Since the widespread adoption of evidence-based practices in medicine, the use of evidence-based practices has rapidly spread to other fields.[5]

More recently, there has been a push for evidence-based education. The use of evidence-based learning techniques such as spaced repetition (a simple scheduling sketch follows this paragraph) can improve students' rate of learning. Some commentators[who?] have suggested that the putative lack of any conspicuous progress in the field of education is attributable to practice resting on the unconnected and noncumulative experience of thousands of individual teachers, each re-inventing the wheel and failing to learn from hard scientific evidence about 'what works'. Opponents of this view argue that hard scientific evidence is a misnomer in education; knowing that a drug works (in medicine) is entirely different from knowing that a teaching method works, for the latter will depend on a host of factors, not least those to do with the style, personality and beliefs of the teacher and the needs of the particular children (Hammersley 2013). Some opponents of EBP in education suggest that teachers need to develop their own personal practice, dependent on personal knowledge garnered through their own experience. Others argue that this must be combined with research evidence, but without the latter being treated as a privileged source.[6]
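As an illustration of the technique just mentioned, the following minimal sketch shows a Leitner-style spaced-repetition schedule, one simple form of spaced repetition. The box intervals are illustrative assumptions, not values taken from any particular study.

    # A minimal sketch of a Leitner-style spaced-repetition schedule.
    # The intervals below are illustrative assumptions.
    REVIEW_INTERVALS_DAYS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}  # box -> days

    def next_review(box: int, answered_correctly: bool) -> tuple[int, int]:
        # Correct answers promote the card to a higher box (longer interval);
        # mistakes send it back to box 1 for frequent review.
        box = min(box + 1, 5) if answered_correctly else 1
        return box, REVIEW_INTERVALS_DAYS[box]

    box, days = next_review(box=2, answered_correctly=True)
    print(f"card moves to box {box}; review again in {days} days")

The design intuition is that review effort is concentrated on material the learner gets wrong, while well-learned material is revisited at ever-longer intervals.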

Vs. tradition

Evidence-based practice is a philosophical approach that is in opposition to tradition. Some degree of reliance on "the way it was always done" can be found in almost every profession, even when those practices are contradicted by new and better information.[7]

Some critics argue that since research is conducted at the population level, results may not generalize to each individual within the population. Therefore, evidence-based practices may fail to provide the best solution for each individual, and traditional practices may better accommodate individual differences. In response, researchers have made an effort to test whether particular practices work better for different subcultures, personality types, etc.[8] Some authors have redefined EBP to include practice that incorporates common wisdom, tradition, and personal values alongside practices based on evidence.[7]

Evaluating evidence

The core activities at the root of evidence-based practice can be identified as:

  • a questioning approach to practice leading to scientific experimentation
  • meticulous observation, enumeration, and analysis replacing anecdotal case description
  • recording and cataloguing the evidence for systematic retrieval.[9]

Evaluation of research quality can be a difficult task requiring meticulous reading of research reports and background information. It may not be appropriate simply to accept the conclusion reported by the researchers; for example, in one investigation of outcome studies, 70% were found to have stated conclusions unjustified by their research design.[10]

Meta-analyses and systematic research syntheses

When there are many small or weak studies of an intervention, a statistical meta-analysis can be used to combine the studies' results and to draw a stronger conclusion about the outcome of the treatment. This can be an important contribution to the establishment of a foundation of evidence about an intervention.
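For illustration, the following minimal sketch pools several hypothetical studies with fixed-effect (inverse-variance) weighting, one common meta-analytic approach; the study labels, effect sizes, and standard errors are invented for the example.

    # A minimal sketch of a fixed-effect (inverse-variance) meta-analysis.
    # The study data below are invented for illustration.
    import math

    studies = [
        ("Study A", 0.30, 0.20),  # (label, effect estimate, standard error)
        ("Study B", 0.10, 0.25),
        ("Study C", 0.45, 0.30),
    ]

    # Weight each study by the inverse of its variance, so more precise
    # studies contribute more to the pooled estimate.
    weights = [1.0 / (se ** 2) for _, _, se in studies]
    pooled = sum(w * eff for w, (_, eff, _) in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    # 95% confidence interval for the pooled effect.
    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled effect = {pooled:.3f}, 95% CI [{low:.3f}, {high:.3f}]")

Because the pooled standard error shrinks as studies are added, the combined estimate can support a stronger conclusion than any single small study.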

In other situations, facts about a group of study outcomes may be gathered and discussed in the form of a systematic research synthesis (SRS).[11] An SRS can be more or less useful depending on the evaluation protocol chosen, and errors in the choice or use of a protocol have led to fallacious reports.[12] The meaningfulness of an SRS report on an intervention is limited by the quality of the research under consideration, but SRS reports can be helpful to readers seeking to understand EBP-related choices.

Miller et al. provide a detailed example of the use of meta-analysis in examining treatment outcome research, incorporating the principles of rigorous empirical research from the strong end of the continuum of levels of evidence.[13] The textbook explains how the included research was selected (e.g. a controlled study comparing two approaches, published in a peer-reviewed journal, with sufficient power to detect significant differences if they occurred) and how each study was checked for validity (how was the outcome measured?) and reliability (did the researchers do what they said they did?). From these ratings a Cumulative Evidence Score was created, weighted by the quality of each study rather than by its outcome, so that studies with stronger designs and better methodological quality ratings carry more weight than weaker ones. The results yield a rank ordering of the 48 treatment modalities included and provide a basis for selecting supportable treatment approaches beyond anecdote, tradition, and lore.
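To make the general mechanism concrete, the sketch below computes a quality-weighted cumulative score and a rank ordering; the scoring scheme, treatment names, and data are illustrative assumptions, not Miller et al.'s actual formula or findings.

    # A minimal sketch of a quality-weighted cumulative evidence score.
    # Each entry: treatment -> list of (methodological quality 1-5,
    # outcome: +1 favorable, -1 unfavorable). All values are invented.
    trials = {
        "Brief intervention": [(5, +1), (4, +1), (3, -1)],
        "Confrontational counseling": [(4, -1), (2, +1)],
        "Motivational enhancement": [(5, +1), (5, +1)],
    }

    def cumulative_evidence_score(results):
        # Weight each trial's outcome by its quality rating, so stronger
        # designs carry more weight than weaker ones (not outcome alone).
        return sum(quality * outcome for quality, outcome in results)

    ranking = sorted(trials, key=lambda t: cumulative_evidence_score(trials[t]),
                     reverse=True)
    for treatment in ranking:
        print(treatment, cumulative_evidence_score(trials[treatment]))

Weighting by quality rather than outcome means a well-designed null result counts against a treatment as strongly as a well-designed positive result counts for it.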

Hierarchy of evidence and evaluation of research quality

A hierarchy of evidence is a ranking system used to describe the strength of results obtained from scientific research. Evidence hierarchies are often applied in evidence-based practices to better understand the quality of evidence backing a claim. Evidence hierarchies are integral to evidence-based medicine.

There is broad agreement on the relative strength of the principal types of studies, but there is no single, universally accepted hierarchy of evidence. More than 80 different hierarchies have been proposed for assessing medical evidence.[14] The design of the study (such as a case report or a blinded experiment) and the endpoints measured (such as survival or quality of life) affect the strength of the evidence. Typically, systematic reviews and meta-analyses rank above randomized controlled trials, randomized controlled trials rank above observational studies, and expert opinion and anecdotal experience rank at the bottom.
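As an illustration, one such hierarchy can be encoded as an ordered scale. This is a minimal sketch assuming the typical ranking just described; as noted above, the 80-plus published hierarchies differ in their details.

    # A minimal sketch of one typical evidence hierarchy as an ordered scale.
    # The exact levels and their ordering vary across published hierarchies.
    from enum import IntEnum

    class EvidenceLevel(IntEnum):
        EXPERT_OPINION_OR_ANECDOTE = 1
        CASE_REPORT = 2
        OBSERVATIONAL_STUDY = 3
        RANDOMIZED_CONTROLLED_TRIAL = 4
        SYSTEMATIC_REVIEW_OR_META_ANALYSIS = 5

    def stronger(a: EvidenceLevel, b: EvidenceLevel) -> EvidenceLevel:
        # A higher level means stronger evidence under this hierarchy.
        return max(a, b)

    print(stronger(EvidenceLevel.CASE_REPORT,
                   EvidenceLevel.RANDOMIZED_CONTROLLED_TRIAL).name)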

Applications of evidence-based practice

Psychology

Evidence-based practice of psychology requires practitioners to follow psychological approaches and techniques that are based on a particular kind of research evidence (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000).

Criteria for empirically supported therapies have been defined by Chambless and Hollon (1998). Accordingly, a therapy is considered "efficacious and specific" if there is evidence from at least two settings that it is superior to a pill or psychological placebo or to another bona fide treatment. If there is evidence from two or more settings that the therapy is superior to no treatment, it is considered "efficacious". If there is support from one or more studies in just a single setting, the therapy is considered possibly efficacious, pending replication (a simplified sketch of these labels as a decision rule appears below). Following these guidelines, cognitive behavior therapy (CBT) stands out as having the most empirical support for a wide range of symptoms in adults, adolescents, and children.[15]

The term "evidence-based practice" is not always used in such a rigorous fashion, and many psychologists claim to follow "evidence-based approaches" even when the methods they use do not meet established criteria for "efficacy" (Berke, Rozell, Hogan, Norcross, and Karpiak, 2011). In reality, not all mental health practitioners receive training in evidence-based approaches, and members of the public are often unaware that evidence-based practices exist. However, there is no guarantee that mental health practitioners trained in "evidence-based approaches" are more effective or safer than those trained in other modalities. Consequently, patients do not always receive the most effective, safe, and cost-effective treatments available.

To improve dissemination of evidence-based practices, the Association for Behavioral and Cognitive Therapies (ABCT) and the Society of Clinical Child and Adolescent Psychology (SCCAP, Division 53 of the American Psychological Association) maintain updated information on their websites on evidence-based practices in psychology for practitioners and the general public. "Evidence-based" is a technical term, and there are many treatments with decades of evidence supporting their efficacy that are not considered "evidence-based."
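For illustration only, the Chambless and Hollon labels described above can be read as a decision rule. The sketch below is a simplification: the parameter names are invented, and the published criteria include further requirements (such as treatment manuals and delineated populations) that are omitted here.

    # A simplified sketch of the Chambless & Hollon (1998) labels as a
    # decision rule; parameter names are invented for illustration.
    def classify_therapy(settings_with_support: int,
                         beats_placebo_or_bona_fide: bool,
                         beats_no_treatment: bool) -> str:
        # "Efficacious and specific": superior to a pill or psychological
        # placebo, or to another bona fide treatment, in >= 2 settings.
        if settings_with_support >= 2 and beats_placebo_or_bona_fide:
            return "efficacious and specific"
        # "Efficacious": superior to no treatment in two or more settings.
        if settings_with_support >= 2 and beats_no_treatment:
            return "efficacious"
        # Support from studies in just a single setting: pending replication.
        if settings_with_support == 1 and (beats_placebo_or_bona_fide
                                           or beats_no_treatment):
            return "possibly efficacious"
        return "insufficient evidence"

    print(classify_therapy(2, True, True))  # -> "efficacious and specific"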

Some discussions of EBP in clinical psychology settings distinguish the latter from "empirically supported treatments" (ESTs). ESTs have been defined as "clearly specified psychological treatments shown to be efficacious in controlled research with a delineated population."[16] Those who distinguish EBP from ESTs highlight the greater emphasis in EBP on integrating the "three legs" of research evidence, clinician expertise, and client values. From the latter perspective, ESTs are understood to place primary or exclusive emphasis on the first "leg," namely, research evidence.[17][18]

Social policy

There are increasing demands for the whole range of social policy and other decisions and programs run by government and the NGO sector to be based on sound evidence as to their effectiveness. This has led to an increased emphasis on the use of a wide range of evaluation approaches directed at obtaining evidence about social programs of all types. A research collaboration called the Campbell Collaboration has been set up in the social policy area to provide evidence for evidence-based social policy decision-making. This collaboration follows the approach pioneered by the Cochrane Collaboration in the health sciences.[19] Using an evidence-based approach to social policy has a number of advantages, because it has the potential to decrease the tendency to run programs which are socially acceptable (e.g. drug education in schools) but which often prove ineffective when evaluated.[20]

Medicine

Evidence-based medicine (EBM) is an approach to medical practice intended to optimize decision-making by emphasizing the use of evidence from well-designed and well-conducted research. Although all medicine based on science has some degree of empirical support, EBM goes further, classifying evidence by its epistemologic strength and requiring that only the strongest types (coming from meta-analyses, systematic reviews, and randomized controlled trials) can yield strong recommendations; weaker types (such as from case-control studies) can yield only weak recommendations. The term was originally used to describe an approach to teaching the practice of medicine and improving decisions by individual physicians about individual patients.[21] Use of the term rapidly expanded to include a previously described approach that emphasized the use of evidence in the design of guidelines and policies that apply to groups of patients and populations ("evidence-based practice policies").[22]

Whether applied to medical education, decisions about individuals, guidelines and policies applied to populations, or administration of health services in general, evidence-based medicine advocates that to the greatest extent possible, decisions and policies should be based on evidence, not just the beliefs of practitioners, experts, or administrators. It thus seeks to ensure that a clinician's opinion, which may be limited by knowledge gaps or biases, is supplemented with all available knowledge from the scientific literature so that best practice can be determined and applied. It promotes the use of formal, explicit methods to analyze evidence and to make it available to decision makers, and it promotes programs to teach these methods to medical students, practitioners, and policymakers.

A process has been specified that provides a standardised route for those seeking to produce evidence of the effectiveness of interventions.[23] Originally developed to establish processes for the production of evidence in the housing sector, the standard is general in nature and is applicable across a variety of practice areas and potential outcomes of interest.

Scientific research

As with other fields, many practices in scientific research are rooted in tradition rather than evidence, and remain unproven. John Ioannidis's 2005 paper "Why Most Published Research Findings Are False"[24] presented evidence that these poor practices regularly result in false findings and enormous waste. The paper became the most downloaded in the Public Library of Science and has the highest number of Mendeley readers across all of science.[25] There has since been a movement for the use of evidence-based practice in conducting scientific research, in an attempt to address the replication crisis and other major issues affecting scientific research.[26] The application of evidence-based practices to research itself is called metascience.

Metascience seeks to increase the quality of scientific research while reducing waste. It is also known as "research on research" and "the science of science", as it uses research methods to study how research is done and where improvements can be made. The five main areas of research in metascience are methodology, reporting, reproducibility, evaluation, and incentives.[27] Metascience has produced a number of reforms in science such as the use of study pre-registration and the implementation of reporting guidelines with the goal of bettering scientific research practices.[28]

Notes

  1. ^ Li, Rita Yi Man; Chau, Kwong Wing; Zeng, Frankie Fanjie (2019). "Ranking of Risks for Existing and New Building Works". Sustainability. 11 (10): 2863. doi:10.3390/su11102863.
  2. ^ Leach, Matthew J. (2006). "Evidence-based practice: A framework for clinical practice and research design". International Journal of Nursing Practice. 12 (5): 248–251. doi:10.1111/j.1440-172X.2006.00587.x. ISSN 1440-172X. PMID 16942511.
  3. ^ Cochrane, A.L. (1972). Effectiveness and Efficiency. Random Reflections on Health Services. London: Nuffield Provincial Hospitals Trust. ISBN 978-0900574177. OCLC 741462.
  4. ^ Cochrane Collaboration (2003). "Archie Cochrane". http://www.cochrane.org/about-us/history/archie-cochrane
  5. ^ "A Brief History of Evidence-based Practice | Evidence Based Practice in Optometry EBP Australia UNSW". www.eboptometry.com. Retrieved 24 June 2019.
  6. ^ Thomas, G. and Pring, R. (Eds.) (2004). Evidence-based Practice in Education. Open University Press.
  7. ^ a b Buysse, V.; Wesley, P.W. (2006). "Evidence-based practice: How did it emerge and what does it really mean for the early childhood field?". Zero to Three. 27 (2): 50–55. ISSN 0736-8038.
  8. ^ de Groot, M.; van der Wouden, J. M.; van Hell, E. A.; Nieweg, M. B. (31 July 2013). "Evidence-based practice for individuals or groups: let's make a difference". Perspectives on Medical Education. 2 (4): 216–221. doi:10.1007/s40037-013-0071-2. PMC 3792230. PMID 24101580.
  9. ^ Peile, E. (2004). "Reflections from medical practice: balancing evidence-based practice with practice based evidence". In Thomas, G.; Pring, R. (eds.). Evidence-based Practice in Education. Open University Press. pp. 102–16. ISBN 978-0335213344.
  10. ^ Rubin, A.; Parrish, D. (2007). "Problematic phrases in the conclusions of published outcome studies". Research on Social Work Practice. 17 (3): 334–47. doi:10.1177/1049731506293726.
  11. ^ Cooper, H. (2003). "Editorial". Psychological Bulletin. 129 (1): 3–9. doi:10.1037/0033-2909.129.1.3. hdl:10161/14947.
  12. ^ Pignotti, M.; Mercer, J. (2007). "Holding Therapy and Dyadic Developmental Psychotherapy are not supported and acceptable social work interventions". Research on Social Work Practice. 17 (4): 513–19. doi:10.1177/1049731506297046.
  13. ^ Miller, W. R.; Wilbourne, P.L.; Hettema, J.E. (2003). "Ch. 2: What Works? A summary of alcohol treatment outcome research". In Hester, R.; Miller, W.R. (eds.). Handbook of alcoholism treatment approaches: Effective alternatives (3rd ed.). Allyn & Bacon. pp. 13–63. ISBN 978-0205360642. Summary table
  14. ^ Siegfried T (2017-11-13). "Philosophical critique exposes flaws in medical evidence hierarchies". Science News. Retrieved 2018-05-16.
  15. ^ Lambert MJ, Bergin AE, Garfield SL (2004). "Introduction and Historical Overview". In Lambert MJ (ed.). Bergin and Garfield's Handbook of Psychotherapy and Behavior Change (5th ed.). New York: John Wiley & Sons. pp. 3–15. ISBN 978-0-471-37755-9.
  16. ^ Chambless DL, Hollon SD (February 1998). "Defining empirically supported therapies". J Consult Clin Psychol. 66 (1): 7–18. CiteSeerX 10.1.1.586.4638. doi:10.1037/0022-006X.66.1.7. PMID 9489259.
  17. ^ APA Presidential Task Force on Evidence-Based Practice (May–June 2006). "Evidence-based practice in psychology" (PDF). American Psychologist. 61 (4): 271–85. doi:10.1037/0003-066x.61.4.271. PMID 16719673.
  18. ^ La Roche, M.L., and Christopher, M.S. (2009). "Changing paradigms from empirically supported treatment to evidence-based practice: A cultural perspective". Professional Psychology: Research and Practice. 40 (4): 396–402. doi:10.1037/a0015240.
  19. ^ "Welcome".
  20. ^ Raines, J.C. (2008). Evidence Based Practice in School Mental Health. Oxford University Press. ISBN 978-0-19-971072-0.
  21. ^ Evidence-Based Medicine Working Group (November 1992). "Evidence-based medicine. A new approach to teaching the practice of medicine" (PDF). JAMA. 268 (17): 2420–25. CiteSeerX 10.1.1.684.3783. doi:10.1001/jama.268.17.2420. PMID 1404801.
  22. ^ Eddy DM (1990). "Practice Policies – Where Do They Come from?". Journal of the American Medical Association. 263 (9): 1265, 1269, 1272, 1275. doi:10.1001/jama.263.9.1265. PMID 2304243.
  23. ^ Vine, Jim (2016), Standard for Producing Evidence – Effectiveness of Interventions – Part 1: Specification (StEv2-1), HACT, ISBN 978-1-911056-01-0, Standards of Evidence
  24. ^ Ioannidis, John P.A. (August 1, 2005). "Why Most Published Research Findings Are False". PLoS Medicine. 2 (8): e124. doi:10.1371/journal.pmed.0020124. ISSN 1549-1277. PMC 1182327. PMID 16060722.
  25. ^ "John P.A. Ioannidis". Stanford Prevention Research Center, Stanford Medicine.
  26. ^ Rathi, Akshat. "Most science research findings are false. Here's how we can change that". Quartz. Retrieved 13 June 2019.
  27. ^ Ioannidis, John P. A.; Fanelli, Daniele; Dunne, Debbie Drake; Goodman, Steven N. (2 October 2015). "Meta-research: Evaluation and Improvement of Research Methods and Practices". PLoS Biology. 13 (10): e1002264. doi:10.1371/journal.pbio.1002264. ISSN 1544-9173. PMC 4592065. PMID 26431313.
  28. ^ Ioannidis, John P. A.; Fanelli, Daniele; Dunne, Debbie Drake; Goodman, Steven N. (2 October 2015). "Meta-research: Evaluation and Improvement of Research Methods and Practices". PLoS Biology. 13 (10): e1002264. doi:10.1371/journal.pbio.1002264. ISSN 1545-7885. PMC 4592065. PMID 26431313.
